Test Report: KVM_Linux 17323

c1ea47c43b7779cefdb242dbac2fab4b02ecdc60:2023-10-02:31265

Failed tests (2/318)

Order  Failed test                                                         Duration (s)
214    TestMultiNode/serial/StartAfterStop                                 21.93
315    TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages   2.23
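The failure table above can be turned into structured records when aggregating flake statistics across runs. A minimal sketch, assuming the whitespace-separated `order / test-name / duration` layout seen in this report (the `Failure` type and `parse_failures` helper are hypothetical, not part of the minikube tooling):

```python
from typing import NamedTuple


class Failure(NamedTuple):
    order: int        # position of the test in the run
    name: str         # full test path, e.g. TestMultiNode/serial/StartAfterStop
    duration_s: float # wall-clock duration in seconds


def parse_failures(rows: list[str]) -> list[Failure]:
    """Parse rows of the form '<order> <name> <duration>' into Failure records."""
    failures = []
    for row in rows:
        order, name, duration = row.split()
        failures.append(Failure(int(order), name, float(duration)))
    return failures


rows = [
    "214 TestMultiNode/serial/StartAfterStop 21.93",
    "315 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 2.23",
]
for f in parse_failures(rows):
    print(f"{f.name}: {f.duration_s}s")
```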
TestMultiNode/serial/StartAfterStop (21.93s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-058614 node start m03 --alsologtostderr: exit status 90 (19.148561752s)

-- stdout --
	* Starting worker node multinode-058614-m03 in cluster multinode-058614
	* Restarting existing kvm2 VM for "multinode-058614-m03" ...
	
	

-- /stdout --
** stderr ** 
	I1002 19:53:07.730582  412553 out.go:296] Setting OutFile to fd 1 ...
	I1002 19:53:07.730889  412553 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 19:53:07.730903  412553 out.go:309] Setting ErrFile to fd 2...
	I1002 19:53:07.730909  412553 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 19:53:07.731080  412553 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-390762/.minikube/bin
	I1002 19:53:07.731373  412553 mustload.go:65] Loading cluster: multinode-058614
	I1002 19:53:07.731788  412553 config.go:182] Loaded profile config "multinode-058614": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 19:53:07.732280  412553 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:53:07.732336  412553 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:53:07.747230  412553 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34347
	I1002 19:53:07.747723  412553 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:53:07.748313  412553 main.go:141] libmachine: Using API Version  1
	I1002 19:53:07.748339  412553 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:53:07.748650  412553 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:53:07.748850  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetState
	W1002 19:53:07.750287  412553 host.go:58] "multinode-058614-m03" host status: Stopped
	I1002 19:53:07.752361  412553 out.go:177] * Starting worker node multinode-058614-m03 in cluster multinode-058614
	I1002 19:53:07.753645  412553 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 19:53:07.753691  412553 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17323-390762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1002 19:53:07.753706  412553 cache.go:57] Caching tarball of preloaded images
	I1002 19:53:07.753798  412553 preload.go:174] Found /home/jenkins/minikube-integration/17323-390762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1002 19:53:07.753808  412553 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 19:53:07.753914  412553 profile.go:148] Saving config to /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/config.json ...
	I1002 19:53:07.754102  412553 start.go:365] acquiring machines lock for multinode-058614-m03: {Name:mk4eec10b828b68be104dfa4b7220ed2aea8b62b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 19:53:07.754174  412553 start.go:369] acquired machines lock for "multinode-058614-m03" in 32.109µs
	I1002 19:53:07.754189  412553 start.go:96] Skipping create...Using existing machine configuration
	I1002 19:53:07.754197  412553 fix.go:54] fixHost starting: m03
	I1002 19:53:07.754434  412553 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:53:07.754462  412553 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:53:07.769017  412553 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35031
	I1002 19:53:07.769404  412553 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:53:07.769889  412553 main.go:141] libmachine: Using API Version  1
	I1002 19:53:07.769913  412553 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:53:07.770239  412553 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:53:07.770419  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .DriverName
	I1002 19:53:07.770683  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetState
	I1002 19:53:07.772096  412553 fix.go:102] recreateIfNeeded on multinode-058614-m03: state=Stopped err=<nil>
	I1002 19:53:07.772133  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .DriverName
	W1002 19:53:07.772323  412553 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 19:53:07.774387  412553 out.go:177] * Restarting existing kvm2 VM for "multinode-058614-m03" ...
	I1002 19:53:07.775872  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .Start
	I1002 19:53:07.776058  412553 main.go:141] libmachine: (multinode-058614-m03) Ensuring networks are active...
	I1002 19:53:07.776707  412553 main.go:141] libmachine: (multinode-058614-m03) Ensuring network default is active
	I1002 19:53:07.776988  412553 main.go:141] libmachine: (multinode-058614-m03) Ensuring network mk-multinode-058614 is active
	I1002 19:53:07.777293  412553 main.go:141] libmachine: (multinode-058614-m03) Getting domain xml...
	I1002 19:53:07.777851  412553 main.go:141] libmachine: (multinode-058614-m03) Creating domain...
	I1002 19:53:09.017309  412553 main.go:141] libmachine: (multinode-058614-m03) Waiting to get IP...
	I1002 19:53:09.018249  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:09.018618  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has current primary IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:09.018677  412553 main.go:141] libmachine: (multinode-058614-m03) Found IP for machine: 192.168.39.119
	I1002 19:53:09.018707  412553 main.go:141] libmachine: (multinode-058614-m03) Reserving static IP address...
	I1002 19:53:09.019189  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "multinode-058614-m03", mac: "52:54:00:d6:2f:9d", ip: "192.168.39.119"} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:52:25 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
	I1002 19:53:09.019240  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | skip adding static IP to network mk-multinode-058614 - found existing host DHCP lease matching {name: "multinode-058614-m03", mac: "52:54:00:d6:2f:9d", ip: "192.168.39.119"}
	I1002 19:53:09.019260  412553 main.go:141] libmachine: (multinode-058614-m03) Reserved static IP address: 192.168.39.119
	I1002 19:53:09.019275  412553 main.go:141] libmachine: (multinode-058614-m03) Waiting for SSH to be available...
	I1002 19:53:09.019290  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | Getting to WaitForSSH function...
	I1002 19:53:09.021456  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:09.021798  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:52:25 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
	I1002 19:53:09.021847  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:09.021946  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | Using SSH client type: external
	I1002 19:53:09.021987  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614-m03/id_rsa (-rw-------)
	I1002 19:53:09.022033  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 19:53:09.022045  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | About to run SSH command:
	I1002 19:53:09.022088  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | exit 0
	I1002 19:53:22.124151  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | SSH cmd err, output: <nil>: 
	I1002 19:53:22.124578  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetConfigRaw
	I1002 19:53:22.125367  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetIP
	I1002 19:53:22.128196  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:22.128672  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:53:20 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
	I1002 19:53:22.128737  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:22.128927  412553 profile.go:148] Saving config to /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/config.json ...
	I1002 19:53:22.129154  412553 machine.go:88] provisioning docker machine ...
	I1002 19:53:22.129176  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .DriverName
	I1002 19:53:22.129421  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetMachineName
	I1002 19:53:22.129593  412553 buildroot.go:166] provisioning hostname "multinode-058614-m03"
	I1002 19:53:22.129619  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetMachineName
	I1002 19:53:22.129873  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHHostname
	I1002 19:53:22.131980  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:22.132379  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:53:20 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
	I1002 19:53:22.132402  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:22.132572  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHPort
	I1002 19:53:22.132765  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
	I1002 19:53:22.132967  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
	I1002 19:53:22.133174  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHUsername
	I1002 19:53:22.133394  412553 main.go:141] libmachine: Using SSH client type: native
	I1002 19:53:22.133760  412553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
	I1002 19:53:22.133777  412553 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-058614-m03 && echo "multinode-058614-m03" | sudo tee /etc/hostname
	I1002 19:53:22.284382  412553 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-058614-m03
	
	I1002 19:53:22.284422  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHHostname
	I1002 19:53:22.287732  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:22.288199  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:53:20 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
	I1002 19:53:22.288243  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:22.288422  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHPort
	I1002 19:53:22.288614  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
	I1002 19:53:22.288771  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
	I1002 19:53:22.288910  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHUsername
	I1002 19:53:22.289070  412553 main.go:141] libmachine: Using SSH client type: native
	I1002 19:53:22.289398  412553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
	I1002 19:53:22.289417  412553 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-058614-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-058614-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-058614-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 19:53:22.411838  412553 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 19:53:22.411866  412553 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17323-390762/.minikube CaCertPath:/home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17323-390762/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17323-390762/.minikube}
	I1002 19:53:22.411897  412553 buildroot.go:174] setting up certificates
	I1002 19:53:22.411907  412553 provision.go:83] configureAuth start
	I1002 19:53:22.411917  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetMachineName
	I1002 19:53:22.412233  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetIP
	I1002 19:53:22.415026  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:22.415527  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:53:20 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
	I1002 19:53:22.415562  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:22.415704  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHHostname
	I1002 19:53:22.418048  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:22.418401  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:53:20 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
	I1002 19:53:22.418426  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:22.418550  412553 provision.go:138] copyHostCerts
	I1002 19:53:22.418607  412553 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-390762/.minikube/ca.pem, removing ...
	I1002 19:53:22.418623  412553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-390762/.minikube/ca.pem
	I1002 19:53:22.418687  412553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17323-390762/.minikube/ca.pem (1078 bytes)
	I1002 19:53:22.418768  412553 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-390762/.minikube/cert.pem, removing ...
	I1002 19:53:22.418776  412553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-390762/.minikube/cert.pem
	I1002 19:53:22.418808  412553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17323-390762/.minikube/cert.pem (1123 bytes)
	I1002 19:53:22.418933  412553 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-390762/.minikube/key.pem, removing ...
	I1002 19:53:22.418948  412553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-390762/.minikube/key.pem
	I1002 19:53:22.418986  412553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17323-390762/.minikube/key.pem (1675 bytes)
	I1002 19:53:22.419044  412553 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17323-390762/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca-key.pem org=jenkins.multinode-058614-m03 san=[192.168.39.119 192.168.39.119 localhost 127.0.0.1 minikube multinode-058614-m03]
	I1002 19:53:22.550695  412553 provision.go:172] copyRemoteCerts
	I1002 19:53:22.550761  412553 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 19:53:22.550788  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHHostname
	I1002 19:53:22.553530  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:22.553878  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:53:20 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
	I1002 19:53:22.553913  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:22.554090  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHPort
	I1002 19:53:22.554321  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
	I1002 19:53:22.554479  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHUsername
	I1002 19:53:22.554590  412553 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614-m03/id_rsa Username:docker}
	I1002 19:53:22.640477  412553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 19:53:22.666331  412553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1002 19:53:22.691259  412553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 19:53:22.715971  412553 provision.go:86] duration metric: configureAuth took 304.051587ms
	I1002 19:53:22.716001  412553 buildroot.go:189] setting minikube options for container-runtime
	I1002 19:53:22.716258  412553 config.go:182] Loaded profile config "multinode-058614": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 19:53:22.716290  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .DriverName
	I1002 19:53:22.716588  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHHostname
	I1002 19:53:22.719275  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:22.719689  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:53:20 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
	I1002 19:53:22.719772  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:22.719850  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHPort
	I1002 19:53:22.720027  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
	I1002 19:53:22.720189  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
	I1002 19:53:22.720359  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHUsername
	I1002 19:53:22.720524  412553 main.go:141] libmachine: Using SSH client type: native
	I1002 19:53:22.720855  412553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
	I1002 19:53:22.720867  412553 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 19:53:22.836864  412553 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1002 19:53:22.836892  412553 buildroot.go:70] root file system type: tmpfs
	I1002 19:53:22.837010  412553 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 19:53:22.837047  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHHostname
	I1002 19:53:22.840132  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:22.840497  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:53:20 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
	I1002 19:53:22.840527  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:22.840738  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHPort
	I1002 19:53:22.840965  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
	I1002 19:53:22.841177  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
	I1002 19:53:22.841284  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHUsername
	I1002 19:53:22.841482  412553 main.go:141] libmachine: Using SSH client type: native
	I1002 19:53:22.841947  412553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
	I1002 19:53:22.842072  412553 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 19:53:22.968106  412553 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1002 19:53:22.968146  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHHostname
	I1002 19:53:22.971024  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:22.971422  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:53:20 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
	I1002 19:53:22.971480  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:22.971644  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHPort
	I1002 19:53:22.971853  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
	I1002 19:53:22.972023  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
	I1002 19:53:22.972217  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHUsername
	I1002 19:53:22.972378  412553 main.go:141] libmachine: Using SSH client type: native
	I1002 19:53:22.972720  412553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
	I1002 19:53:22.972754  412553 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1002 19:53:23.810268  412553 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1002 19:53:23.810299  412553 machine.go:91] provisioned docker machine in 1.681127533s
	I1002 19:53:23.810313  412553 start.go:300] post-start starting for "multinode-058614-m03" (driver="kvm2")
	I1002 19:53:23.810331  412553 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 19:53:23.810360  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .DriverName
	I1002 19:53:23.810704  412553 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 19:53:23.810739  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHHostname
	I1002 19:53:23.813455  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:23.813850  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:53:20 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
	I1002 19:53:23.813881  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:23.814045  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHPort
	I1002 19:53:23.814305  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
	I1002 19:53:23.814478  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHUsername
	I1002 19:53:23.814648  412553 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614-m03/id_rsa Username:docker}
	I1002 19:53:23.900879  412553 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 19:53:23.905123  412553 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 19:53:23.905150  412553 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-390762/.minikube/addons for local assets ...
	I1002 19:53:23.905222  412553 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-390762/.minikube/files for local assets ...
	I1002 19:53:23.905305  412553 filesync.go:149] local asset: /home/jenkins/minikube-integration/17323-390762/.minikube/files/etc/ssl/certs/3979952.pem -> 3979952.pem in /etc/ssl/certs
	I1002 19:53:23.905414  412553 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 19:53:23.913460  412553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/files/etc/ssl/certs/3979952.pem --> /etc/ssl/certs/3979952.pem (1708 bytes)
	I1002 19:53:23.937280  412553 start.go:303] post-start completed in 126.950582ms
	I1002 19:53:23.937305  412553 fix.go:56] fixHost completed within 16.183111069s
	I1002 19:53:23.937328  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHHostname
	I1002 19:53:23.939851  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:23.940263  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:53:20 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
	I1002 19:53:23.940293  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:23.940422  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHPort
	I1002 19:53:23.940648  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
	I1002 19:53:23.940913  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
	I1002 19:53:23.941111  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHUsername
	I1002 19:53:23.941328  412553 main.go:141] libmachine: Using SSH client type: native
	I1002 19:53:23.941790  412553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
	I1002 19:53:23.941808  412553 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 19:53:24.056181  412553 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696276404.004633391
	
	I1002 19:53:24.056209  412553 fix.go:206] guest clock: 1696276404.004633391
	I1002 19:53:24.056221  412553 fix.go:219] Guest: 2023-10-02 19:53:24.004633391 +0000 UTC Remote: 2023-10-02 19:53:23.937309412 +0000 UTC m=+16.237846500 (delta=67.323979ms)
	I1002 19:53:24.056279  412553 fix.go:190] guest clock delta is within tolerance: 67.323979ms
	I1002 19:53:24.056290  412553 start.go:83] releasing machines lock for "multinode-058614-m03", held for 16.302104622s
	I1002 19:53:24.056320  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .DriverName
	I1002 19:53:24.056596  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetIP
	I1002 19:53:24.059178  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:24.059648  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:53:20 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
	I1002 19:53:24.059686  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:24.059806  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .DriverName
	I1002 19:53:24.060355  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .DriverName
	I1002 19:53:24.060537  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .DriverName
	I1002 19:53:24.060624  412553 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 19:53:24.060692  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHHostname
	I1002 19:53:24.060805  412553 ssh_runner.go:195] Run: systemctl --version
	I1002 19:53:24.060837  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHHostname
	I1002 19:53:24.063214  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:24.063630  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:53:20 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
	I1002 19:53:24.063663  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:24.063775  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:24.063814  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHPort
	I1002 19:53:24.064007  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
	I1002 19:53:24.064144  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:53:20 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
	I1002 19:53:24.064179  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHUsername
	I1002 19:53:24.064185  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
	I1002 19:53:24.064341  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHPort
	I1002 19:53:24.064389  412553 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614-m03/id_rsa Username:docker}
	I1002 19:53:24.064508  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
	I1002 19:53:24.064658  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHUsername
	I1002 19:53:24.064806  412553 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614-m03/id_rsa Username:docker}
	I1002 19:53:24.174359  412553 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 19:53:24.180235  412553 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 19:53:24.180318  412553 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 19:53:24.196678  412553 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 19:53:24.196699  412553 start.go:469] detecting cgroup driver to use...
	I1002 19:53:24.196850  412553 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 19:53:24.215886  412553 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1002 19:53:24.226538  412553 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 19:53:24.236808  412553 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 19:53:24.236854  412553 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 19:53:24.247643  412553 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 19:53:24.257712  412553 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 19:53:24.267699  412553 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 19:53:24.277892  412553 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 19:53:24.288552  412553 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 19:53:24.298600  412553 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 19:53:24.307822  412553 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 19:53:24.316506  412553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:53:24.420801  412553 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 19:53:24.437968  412553 start.go:469] detecting cgroup driver to use...
	I1002 19:53:24.438057  412553 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 19:53:24.457642  412553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 19:53:24.469994  412553 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 19:53:24.485468  412553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 19:53:24.497160  412553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 19:53:24.510821  412553 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1002 19:53:24.539724  412553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 19:53:24.553199  412553 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 19:53:24.570343  412553 ssh_runner.go:195] Run: which cri-dockerd
	I1002 19:53:24.574485  412553 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 19:53:24.583668  412553 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1002 19:53:24.599817  412553 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 19:53:24.701468  412553 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 19:53:24.817089  412553 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 19:53:24.817253  412553 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1002 19:53:24.834587  412553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:53:24.944078  412553 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 19:53:26.356933  412553 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.412801819s)
	I1002 19:53:26.357003  412553 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 19:53:26.456140  412553 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1002 19:53:26.567722  412553 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 19:53:26.693286  412553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:53:26.813638  412553 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1002 19:53:26.832629  412553 out.go:177] 
	W1002 19:53:26.833965  412553 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W1002 19:53:26.833987  412553 out.go:239] * 
	W1002 19:53:26.838400  412553 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 19:53:26.839592  412553 out.go:177] 

                                                
                                                
** /stderr **
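The run above exits with `RUNTIME_ENABLE` because `sudo systemctl restart cri-docker.socket` fails inside the guest; the log itself points at `journalctl -xe` on the VM for the root cause. When triaging many such reports, a small (hypothetical) helper can pull the failing systemd unit straight out of the error line — the `err` sample below is copied from this report, and the sed pattern is an assumption about the fixed `systemctl restart <unit>: Process exited` shape minikube prints:

```shell
#!/bin/sh
# Extract the failing systemd unit name from a minikube RUNTIME_ENABLE error line.
err='X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1'

# Capture the token between "systemctl restart " and the next colon.
unit=$(printf '%s\n' "$err" | sed -n 's/.*systemctl restart \([^:]*\):.*/\1/p')
echo "$unit"
```

On the VM itself, `journalctl -u "$unit"` (here `cri-docker.socket`) would then show why the socket restart job failed.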
multinode_test.go:256: I1002 19:53:07.730582  412553 out.go:296] Setting OutFile to fd 1 ...
I1002 19:53:07.730889  412553 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 19:53:07.730903  412553 out.go:309] Setting ErrFile to fd 2...
I1002 19:53:07.730909  412553 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 19:53:07.731080  412553 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-390762/.minikube/bin
I1002 19:53:07.731373  412553 mustload.go:65] Loading cluster: multinode-058614
I1002 19:53:07.731788  412553 config.go:182] Loaded profile config "multinode-058614": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 19:53:07.732280  412553 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1002 19:53:07.732336  412553 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 19:53:07.747230  412553 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34347
I1002 19:53:07.747723  412553 main.go:141] libmachine: () Calling .GetVersion
I1002 19:53:07.748313  412553 main.go:141] libmachine: Using API Version  1
I1002 19:53:07.748339  412553 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 19:53:07.748650  412553 main.go:141] libmachine: () Calling .GetMachineName
I1002 19:53:07.748850  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetState
W1002 19:53:07.750287  412553 host.go:58] "multinode-058614-m03" host status: Stopped
I1002 19:53:07.752361  412553 out.go:177] * Starting worker node multinode-058614-m03 in cluster multinode-058614
I1002 19:53:07.753645  412553 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
I1002 19:53:07.753691  412553 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17323-390762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
I1002 19:53:07.753706  412553 cache.go:57] Caching tarball of preloaded images
I1002 19:53:07.753798  412553 preload.go:174] Found /home/jenkins/minikube-integration/17323-390762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1002 19:53:07.753808  412553 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
I1002 19:53:07.753914  412553 profile.go:148] Saving config to /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/config.json ...
I1002 19:53:07.754102  412553 start.go:365] acquiring machines lock for multinode-058614-m03: {Name:mk4eec10b828b68be104dfa4b7220ed2aea8b62b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1002 19:53:07.754174  412553 start.go:369] acquired machines lock for "multinode-058614-m03" in 32.109µs
I1002 19:53:07.754189  412553 start.go:96] Skipping create...Using existing machine configuration
I1002 19:53:07.754197  412553 fix.go:54] fixHost starting: m03
I1002 19:53:07.754434  412553 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1002 19:53:07.754462  412553 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 19:53:07.769017  412553 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35031
I1002 19:53:07.769404  412553 main.go:141] libmachine: () Calling .GetVersion
I1002 19:53:07.769889  412553 main.go:141] libmachine: Using API Version  1
I1002 19:53:07.769913  412553 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 19:53:07.770239  412553 main.go:141] libmachine: () Calling .GetMachineName
I1002 19:53:07.770419  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .DriverName
I1002 19:53:07.770683  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetState
I1002 19:53:07.772096  412553 fix.go:102] recreateIfNeeded on multinode-058614-m03: state=Stopped err=<nil>
I1002 19:53:07.772133  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .DriverName
W1002 19:53:07.772323  412553 fix.go:128] unexpected machine state, will restart: <nil>
I1002 19:53:07.774387  412553 out.go:177] * Restarting existing kvm2 VM for "multinode-058614-m03" ...
I1002 19:53:07.775872  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .Start
I1002 19:53:07.776058  412553 main.go:141] libmachine: (multinode-058614-m03) Ensuring networks are active...
I1002 19:53:07.776707  412553 main.go:141] libmachine: (multinode-058614-m03) Ensuring network default is active
I1002 19:53:07.776988  412553 main.go:141] libmachine: (multinode-058614-m03) Ensuring network mk-multinode-058614 is active
I1002 19:53:07.777293  412553 main.go:141] libmachine: (multinode-058614-m03) Getting domain xml...
I1002 19:53:07.777851  412553 main.go:141] libmachine: (multinode-058614-m03) Creating domain...
I1002 19:53:09.017309  412553 main.go:141] libmachine: (multinode-058614-m03) Waiting to get IP...
I1002 19:53:09.018249  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:09.018618  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has current primary IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:09.018677  412553 main.go:141] libmachine: (multinode-058614-m03) Found IP for machine: 192.168.39.119
I1002 19:53:09.018707  412553 main.go:141] libmachine: (multinode-058614-m03) Reserving static IP address...
I1002 19:53:09.019189  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "multinode-058614-m03", mac: "52:54:00:d6:2f:9d", ip: "192.168.39.119"} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:52:25 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
I1002 19:53:09.019240  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | skip adding static IP to network mk-multinode-058614 - found existing host DHCP lease matching {name: "multinode-058614-m03", mac: "52:54:00:d6:2f:9d", ip: "192.168.39.119"}
I1002 19:53:09.019260  412553 main.go:141] libmachine: (multinode-058614-m03) Reserved static IP address: 192.168.39.119
I1002 19:53:09.019275  412553 main.go:141] libmachine: (multinode-058614-m03) Waiting for SSH to be available...
I1002 19:53:09.019290  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | Getting to WaitForSSH function...
I1002 19:53:09.021456  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:09.021798  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:52:25 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
I1002 19:53:09.021847  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:09.021946  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | Using SSH client type: external
I1002 19:53:09.021987  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614-m03/id_rsa (-rw-------)
I1002 19:53:09.022033  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
I1002 19:53:09.022045  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | About to run SSH command:
I1002 19:53:09.022088  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | exit 0
I1002 19:53:22.124151  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | SSH cmd err, output: <nil>: 
I1002 19:53:22.124578  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetConfigRaw
I1002 19:53:22.125367  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetIP
I1002 19:53:22.128196  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:22.128672  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:53:20 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
I1002 19:53:22.128737  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:22.128927  412553 profile.go:148] Saving config to /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/config.json ...
I1002 19:53:22.129154  412553 machine.go:88] provisioning docker machine ...
I1002 19:53:22.129176  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .DriverName
I1002 19:53:22.129421  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetMachineName
I1002 19:53:22.129593  412553 buildroot.go:166] provisioning hostname "multinode-058614-m03"
I1002 19:53:22.129619  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetMachineName
I1002 19:53:22.129873  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHHostname
I1002 19:53:22.131980  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:22.132379  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:53:20 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
I1002 19:53:22.132402  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:22.132572  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHPort
I1002 19:53:22.132765  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
I1002 19:53:22.132967  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
I1002 19:53:22.133174  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHUsername
I1002 19:53:22.133394  412553 main.go:141] libmachine: Using SSH client type: native
I1002 19:53:22.133760  412553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
I1002 19:53:22.133777  412553 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-058614-m03 && echo "multinode-058614-m03" | sudo tee /etc/hostname
I1002 19:53:22.284382  412553 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-058614-m03

                                                
                                                
I1002 19:53:22.284422  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHHostname
I1002 19:53:22.287732  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:22.288199  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:53:20 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
I1002 19:53:22.288243  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:22.288422  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHPort
I1002 19:53:22.288614  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
I1002 19:53:22.288771  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
I1002 19:53:22.288910  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHUsername
I1002 19:53:22.289070  412553 main.go:141] libmachine: Using SSH client type: native
I1002 19:53:22.289398  412553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
I1002 19:53:22.289417  412553 main.go:141] libmachine: About to run SSH command:

                                                
                                                
		if ! grep -xq '.*\smultinode-058614-m03' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-058614-m03/g' /etc/hosts;
			else 
				echo '127.0.1.1 multinode-058614-m03' | sudo tee -a /etc/hosts; 
			fi
		fi
I1002 19:53:22.411838  412553 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I1002 19:53:22.411866  412553 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17323-390762/.minikube CaCertPath:/home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17323-390762/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17323-390762/.minikube}
I1002 19:53:22.411897  412553 buildroot.go:174] setting up certificates
I1002 19:53:22.411907  412553 provision.go:83] configureAuth start
I1002 19:53:22.411917  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetMachineName
I1002 19:53:22.412233  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetIP
I1002 19:53:22.415026  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:22.415527  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:53:20 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
I1002 19:53:22.415562  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:22.415704  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHHostname
I1002 19:53:22.418048  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:22.418401  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:53:20 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
I1002 19:53:22.418426  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:22.418550  412553 provision.go:138] copyHostCerts
I1002 19:53:22.418607  412553 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-390762/.minikube/ca.pem, removing ...
I1002 19:53:22.418623  412553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-390762/.minikube/ca.pem
I1002 19:53:22.418687  412553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17323-390762/.minikube/ca.pem (1078 bytes)
I1002 19:53:22.418768  412553 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-390762/.minikube/cert.pem, removing ...
I1002 19:53:22.418776  412553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-390762/.minikube/cert.pem
I1002 19:53:22.418808  412553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17323-390762/.minikube/cert.pem (1123 bytes)
I1002 19:53:22.418933  412553 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-390762/.minikube/key.pem, removing ...
I1002 19:53:22.418948  412553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-390762/.minikube/key.pem
I1002 19:53:22.418986  412553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17323-390762/.minikube/key.pem (1675 bytes)
I1002 19:53:22.419044  412553 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17323-390762/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca-key.pem org=jenkins.multinode-058614-m03 san=[192.168.39.119 192.168.39.119 localhost 127.0.0.1 minikube multinode-058614-m03]
I1002 19:53:22.550695  412553 provision.go:172] copyRemoteCerts
I1002 19:53:22.550761  412553 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1002 19:53:22.550788  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHHostname
I1002 19:53:22.553530  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:22.553878  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:53:20 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
I1002 19:53:22.553913  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:22.554090  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHPort
I1002 19:53:22.554321  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
I1002 19:53:22.554479  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHUsername
I1002 19:53:22.554590  412553 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614-m03/id_rsa Username:docker}
I1002 19:53:22.640477  412553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1002 19:53:22.666331  412553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I1002 19:53:22.691259  412553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1002 19:53:22.715971  412553 provision.go:86] duration metric: configureAuth took 304.051587ms
I1002 19:53:22.716001  412553 buildroot.go:189] setting minikube options for container-runtime
I1002 19:53:22.716258  412553 config.go:182] Loaded profile config "multinode-058614": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 19:53:22.716290  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .DriverName
I1002 19:53:22.716588  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHHostname
I1002 19:53:22.719275  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:22.719689  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:53:20 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
I1002 19:53:22.719772  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:22.719850  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHPort
I1002 19:53:22.720027  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
I1002 19:53:22.720189  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
I1002 19:53:22.720359  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHUsername
I1002 19:53:22.720524  412553 main.go:141] libmachine: Using SSH client type: native
I1002 19:53:22.720855  412553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
I1002 19:53:22.720867  412553 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1002 19:53:22.836864  412553 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I1002 19:53:22.836892  412553 buildroot.go:70] root file system type: tmpfs
I1002 19:53:22.837010  412553 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I1002 19:53:22.837047  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHHostname
I1002 19:53:22.840132  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:22.840497  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:53:20 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
I1002 19:53:22.840527  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:22.840738  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHPort
I1002 19:53:22.840965  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
I1002 19:53:22.841177  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
I1002 19:53:22.841284  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHUsername
I1002 19:53:22.841482  412553 main.go:141] libmachine: Using SSH client type: native
I1002 19:53:22.841947  412553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
I1002 19:53:22.842072  412553 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1002 19:53:22.968106  412553 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
I1002 19:53:22.968146  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHHostname
I1002 19:53:22.971024  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:22.971422  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:53:20 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
I1002 19:53:22.971480  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:22.971644  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHPort
I1002 19:53:22.971853  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
I1002 19:53:22.972023  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
I1002 19:53:22.972217  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHUsername
I1002 19:53:22.972378  412553 main.go:141] libmachine: Using SSH client type: native
I1002 19:53:22.972720  412553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
I1002 19:53:22.972754  412553 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1002 19:53:23.810268  412553 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I1002 19:53:23.810299  412553 machine.go:91] provisioned docker machine in 1.681127533s
I1002 19:53:23.810313  412553 start.go:300] post-start starting for "multinode-058614-m03" (driver="kvm2")
I1002 19:53:23.810331  412553 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1002 19:53:23.810360  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .DriverName
I1002 19:53:23.810704  412553 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1002 19:53:23.810739  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHHostname
I1002 19:53:23.813455  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:23.813850  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:53:20 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
I1002 19:53:23.813881  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:23.814045  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHPort
I1002 19:53:23.814305  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
I1002 19:53:23.814478  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHUsername
I1002 19:53:23.814648  412553 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614-m03/id_rsa Username:docker}
I1002 19:53:23.900879  412553 ssh_runner.go:195] Run: cat /etc/os-release
I1002 19:53:23.905123  412553 info.go:137] Remote host: Buildroot 2021.02.12
I1002 19:53:23.905150  412553 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-390762/.minikube/addons for local assets ...
I1002 19:53:23.905222  412553 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-390762/.minikube/files for local assets ...
I1002 19:53:23.905305  412553 filesync.go:149] local asset: /home/jenkins/minikube-integration/17323-390762/.minikube/files/etc/ssl/certs/3979952.pem -> 3979952.pem in /etc/ssl/certs
I1002 19:53:23.905414  412553 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1002 19:53:23.913460  412553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/files/etc/ssl/certs/3979952.pem --> /etc/ssl/certs/3979952.pem (1708 bytes)
I1002 19:53:23.937280  412553 start.go:303] post-start completed in 126.950582ms
I1002 19:53:23.937305  412553 fix.go:56] fixHost completed within 16.183111069s
I1002 19:53:23.937328  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHHostname
I1002 19:53:23.939851  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:23.940263  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:53:20 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
I1002 19:53:23.940293  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:23.940422  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHPort
I1002 19:53:23.940648  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
I1002 19:53:23.940913  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
I1002 19:53:23.941111  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHUsername
I1002 19:53:23.941328  412553 main.go:141] libmachine: Using SSH client type: native
I1002 19:53:23.941790  412553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
I1002 19:53:23.941808  412553 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I1002 19:53:24.056181  412553 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696276404.004633391
I1002 19:53:24.056209  412553 fix.go:206] guest clock: 1696276404.004633391
I1002 19:53:24.056221  412553 fix.go:219] Guest: 2023-10-02 19:53:24.004633391 +0000 UTC Remote: 2023-10-02 19:53:23.937309412 +0000 UTC m=+16.237846500 (delta=67.323979ms)
I1002 19:53:24.056279  412553 fix.go:190] guest clock delta is within tolerance: 67.323979ms
I1002 19:53:24.056290  412553 start.go:83] releasing machines lock for "multinode-058614-m03", held for 16.302104622s
I1002 19:53:24.056320  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .DriverName
I1002 19:53:24.056596  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetIP
I1002 19:53:24.059178  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:24.059648  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:53:20 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
I1002 19:53:24.059686  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:24.059806  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .DriverName
I1002 19:53:24.060355  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .DriverName
I1002 19:53:24.060537  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .DriverName
I1002 19:53:24.060624  412553 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1002 19:53:24.060692  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHHostname
I1002 19:53:24.060805  412553 ssh_runner.go:195] Run: systemctl --version
I1002 19:53:24.060837  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHHostname
I1002 19:53:24.063214  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:24.063630  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:53:20 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
I1002 19:53:24.063663  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:24.063775  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:24.063814  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHPort
I1002 19:53:24.064007  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
I1002 19:53:24.064144  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:2f:9d", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:53:20 +0000 UTC Type:0 Mac:52:54:00:d6:2f:9d Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-058614-m03 Clientid:01:52:54:00:d6:2f:9d}
I1002 19:53:24.064179  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHUsername
I1002 19:53:24.064185  412553 main.go:141] libmachine: (multinode-058614-m03) DBG | domain multinode-058614-m03 has defined IP address 192.168.39.119 and MAC address 52:54:00:d6:2f:9d in network mk-multinode-058614
I1002 19:53:24.064341  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHPort
I1002 19:53:24.064389  412553 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614-m03/id_rsa Username:docker}
I1002 19:53:24.064508  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHKeyPath
I1002 19:53:24.064658  412553 main.go:141] libmachine: (multinode-058614-m03) Calling .GetSSHUsername
I1002 19:53:24.064806  412553 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614-m03/id_rsa Username:docker}
I1002 19:53:24.174359  412553 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1002 19:53:24.180235  412553 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1002 19:53:24.180318  412553 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1002 19:53:24.196678  412553 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1002 19:53:24.196699  412553 start.go:469] detecting cgroup driver to use...
I1002 19:53:24.196850  412553 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1002 19:53:24.215886  412553 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I1002 19:53:24.226538  412553 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1002 19:53:24.236808  412553 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I1002 19:53:24.236854  412553 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1002 19:53:24.247643  412553 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1002 19:53:24.257712  412553 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1002 19:53:24.267699  412553 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1002 19:53:24.277892  412553 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1002 19:53:24.288552  412553 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1002 19:53:24.298600  412553 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1002 19:53:24.307822  412553 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1002 19:53:24.316506  412553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1002 19:53:24.420801  412553 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1002 19:53:24.437968  412553 start.go:469] detecting cgroup driver to use...
I1002 19:53:24.438057  412553 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1002 19:53:24.457642  412553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1002 19:53:24.469994  412553 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1002 19:53:24.485468  412553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1002 19:53:24.497160  412553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1002 19:53:24.510821  412553 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1002 19:53:24.539724  412553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1002 19:53:24.553199  412553 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1002 19:53:24.570343  412553 ssh_runner.go:195] Run: which cri-dockerd
I1002 19:53:24.574485  412553 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1002 19:53:24.583668  412553 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I1002 19:53:24.599817  412553 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1002 19:53:24.701468  412553 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1002 19:53:24.817089  412553 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
I1002 19:53:24.817253  412553 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
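The 130-byte daemon.json itself is not logged. A minimal illustrative config that selects the cgroupfs driver (`exec-opts` with `native.cgroupdriver` is the standard Docker option for this; the exact payload minikube writes may differ) would look like:

```json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
```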
I1002 19:53:24.834587  412553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1002 19:53:24.944078  412553 ssh_runner.go:195] Run: sudo systemctl restart docker
I1002 19:53:26.356933  412553 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.412801819s)
I1002 19:53:26.357003  412553 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1002 19:53:26.456140  412553 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I1002 19:53:26.567722  412553 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1002 19:53:26.693286  412553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1002 19:53:26.813638  412553 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I1002 19:53:26.832629  412553 out.go:177] 
W1002 19:53:26.833965  412553 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
stdout:

                                                
                                                
stderr:
Job failed. See "journalctl -xe" for details.

                                                
                                                
W1002 19:53:26.833987  412553 out.go:239] * 
W1002 19:53:26.838400  412553 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1002 19:53:26.839592  412553 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-linux-amd64 -p multinode-058614 node start m03 --alsologtostderr": exit status 90
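The root failure is `systemctl restart cri-docker.socket` exiting with status 1 and only "Job failed. See \"journalctl -xe\" for details." to go on. A sketch of the follow-up one would run inside the affected VM (unit names taken from the log above; the helper name is illustrative, and the commands may need `sudo` when not run as root):

```shell
# Illustrative triage for the failed cri-docker.socket restart.
diagnose_cri_docker() {
  # Degrade gracefully on hosts without systemd.
  command -v systemctl >/dev/null 2>&1 || { echo "systemd not available"; return 0; }
  # Current state of the socket unit that failed to restart.
  systemctl status cri-docker.socket --no-pager || true
  # Recent journal entries for the unit, as the error message suggests.
  journalctl -xu cri-docker.socket --no-pager | tail -n 50 || true
  # The socket activates cri-docker.service; its failure surfaces here too.
  systemctl status cri-docker.service --no-pager || true
}

diagnose_cri_docker
```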
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-058614 status: exit status 2 (585.5811ms)

                                                
                                                
-- stdout --
	multinode-058614
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-058614-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-058614-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-058614 status" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-058614 -n multinode-058614
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-058614 logs -n 25: (1.236161069s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-058614 cp multinode-058614:/home/docker/cp-test.txt                           | multinode-058614 | jenkins | v1.31.2 | 02 Oct 23 19:52 UTC | 02 Oct 23 19:52 UTC |
	|         | multinode-058614-m03:/home/docker/cp-test_multinode-058614_multinode-058614-m03.txt     |                  |         |         |                     |                     |
	| ssh     | multinode-058614 ssh -n                                                                 | multinode-058614 | jenkins | v1.31.2 | 02 Oct 23 19:52 UTC | 02 Oct 23 19:52 UTC |
	|         | multinode-058614 sudo cat                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-058614 ssh -n multinode-058614-m03 sudo cat                                   | multinode-058614 | jenkins | v1.31.2 | 02 Oct 23 19:52 UTC | 02 Oct 23 19:52 UTC |
	|         | /home/docker/cp-test_multinode-058614_multinode-058614-m03.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-058614 cp testdata/cp-test.txt                                                | multinode-058614 | jenkins | v1.31.2 | 02 Oct 23 19:52 UTC | 02 Oct 23 19:52 UTC |
	|         | multinode-058614-m02:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-058614 ssh -n                                                                 | multinode-058614 | jenkins | v1.31.2 | 02 Oct 23 19:52 UTC | 02 Oct 23 19:52 UTC |
	|         | multinode-058614-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-058614 cp multinode-058614-m02:/home/docker/cp-test.txt                       | multinode-058614 | jenkins | v1.31.2 | 02 Oct 23 19:52 UTC | 02 Oct 23 19:52 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3174959036/001/cp-test_multinode-058614-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-058614 ssh -n                                                                 | multinode-058614 | jenkins | v1.31.2 | 02 Oct 23 19:52 UTC | 02 Oct 23 19:53 UTC |
	|         | multinode-058614-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-058614 cp multinode-058614-m02:/home/docker/cp-test.txt                       | multinode-058614 | jenkins | v1.31.2 | 02 Oct 23 19:53 UTC | 02 Oct 23 19:53 UTC |
	|         | multinode-058614:/home/docker/cp-test_multinode-058614-m02_multinode-058614.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-058614 ssh -n                                                                 | multinode-058614 | jenkins | v1.31.2 | 02 Oct 23 19:53 UTC | 02 Oct 23 19:53 UTC |
	|         | multinode-058614-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-058614 ssh -n multinode-058614 sudo cat                                       | multinode-058614 | jenkins | v1.31.2 | 02 Oct 23 19:53 UTC | 02 Oct 23 19:53 UTC |
	|         | /home/docker/cp-test_multinode-058614-m02_multinode-058614.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-058614 cp multinode-058614-m02:/home/docker/cp-test.txt                       | multinode-058614 | jenkins | v1.31.2 | 02 Oct 23 19:53 UTC | 02 Oct 23 19:53 UTC |
	|         | multinode-058614-m03:/home/docker/cp-test_multinode-058614-m02_multinode-058614-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-058614 ssh -n                                                                 | multinode-058614 | jenkins | v1.31.2 | 02 Oct 23 19:53 UTC | 02 Oct 23 19:53 UTC |
	|         | multinode-058614-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-058614 ssh -n multinode-058614-m03 sudo cat                                   | multinode-058614 | jenkins | v1.31.2 | 02 Oct 23 19:53 UTC | 02 Oct 23 19:53 UTC |
	|         | /home/docker/cp-test_multinode-058614-m02_multinode-058614-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-058614 cp testdata/cp-test.txt                                                | multinode-058614 | jenkins | v1.31.2 | 02 Oct 23 19:53 UTC | 02 Oct 23 19:53 UTC |
	|         | multinode-058614-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-058614 ssh -n                                                                 | multinode-058614 | jenkins | v1.31.2 | 02 Oct 23 19:53 UTC | 02 Oct 23 19:53 UTC |
	|         | multinode-058614-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-058614 cp multinode-058614-m03:/home/docker/cp-test.txt                       | multinode-058614 | jenkins | v1.31.2 | 02 Oct 23 19:53 UTC | 02 Oct 23 19:53 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3174959036/001/cp-test_multinode-058614-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-058614 ssh -n                                                                 | multinode-058614 | jenkins | v1.31.2 | 02 Oct 23 19:53 UTC | 02 Oct 23 19:53 UTC |
	|         | multinode-058614-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-058614 cp multinode-058614-m03:/home/docker/cp-test.txt                       | multinode-058614 | jenkins | v1.31.2 | 02 Oct 23 19:53 UTC | 02 Oct 23 19:53 UTC |
	|         | multinode-058614:/home/docker/cp-test_multinode-058614-m03_multinode-058614.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-058614 ssh -n                                                                 | multinode-058614 | jenkins | v1.31.2 | 02 Oct 23 19:53 UTC | 02 Oct 23 19:53 UTC |
	|         | multinode-058614-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-058614 ssh -n multinode-058614 sudo cat                                       | multinode-058614 | jenkins | v1.31.2 | 02 Oct 23 19:53 UTC | 02 Oct 23 19:53 UTC |
	|         | /home/docker/cp-test_multinode-058614-m03_multinode-058614.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-058614 cp multinode-058614-m03:/home/docker/cp-test.txt                       | multinode-058614 | jenkins | v1.31.2 | 02 Oct 23 19:53 UTC | 02 Oct 23 19:53 UTC |
	|         | multinode-058614-m02:/home/docker/cp-test_multinode-058614-m03_multinode-058614-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-058614 ssh -n                                                                 | multinode-058614 | jenkins | v1.31.2 | 02 Oct 23 19:53 UTC | 02 Oct 23 19:53 UTC |
	|         | multinode-058614-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-058614 ssh -n multinode-058614-m02 sudo cat                                   | multinode-058614 | jenkins | v1.31.2 | 02 Oct 23 19:53 UTC | 02 Oct 23 19:53 UTC |
	|         | /home/docker/cp-test_multinode-058614-m03_multinode-058614-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-058614 node stop m03                                                          | multinode-058614 | jenkins | v1.31.2 | 02 Oct 23 19:53 UTC | 02 Oct 23 19:53 UTC |
	| node    | multinode-058614 node start                                                             | multinode-058614 | jenkins | v1.31.2 | 02 Oct 23 19:53 UTC |                     |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 19:49:58
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 19:49:58.202275  409972 out.go:296] Setting OutFile to fd 1 ...
	I1002 19:49:58.202653  409972 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 19:49:58.202707  409972 out.go:309] Setting ErrFile to fd 2...
	I1002 19:49:58.202732  409972 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 19:49:58.203304  409972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-390762/.minikube/bin
	I1002 19:49:58.204250  409972 out.go:303] Setting JSON to false
	I1002 19:49:58.205183  409972 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":9149,"bootTime":1696267049,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 19:49:58.205245  409972 start.go:138] virtualization: kvm guest
	I1002 19:49:58.207242  409972 out.go:177] * [multinode-058614] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 19:49:58.209148  409972 out.go:177]   - MINIKUBE_LOCATION=17323
	I1002 19:49:58.209207  409972 notify.go:220] Checking for updates...
	I1002 19:49:58.210642  409972 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 19:49:58.212116  409972 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17323-390762/kubeconfig
	I1002 19:49:58.213624  409972 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-390762/.minikube
	I1002 19:49:58.214988  409972 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 19:49:58.216218  409972 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 19:49:58.217584  409972 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 19:49:58.253564  409972 out.go:177] * Using the kvm2 driver based on user configuration
	I1002 19:49:58.254833  409972 start.go:298] selected driver: kvm2
	I1002 19:49:58.254847  409972 start.go:902] validating driver "kvm2" against <nil>
	I1002 19:49:58.254858  409972 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 19:49:58.255859  409972 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 19:49:58.255967  409972 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17323-390762/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 19:49:58.270742  409972 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1002 19:49:58.270795  409972 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 19:49:58.270986  409972 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 19:49:58.271018  409972 cni.go:84] Creating CNI manager for ""
	I1002 19:49:58.271026  409972 cni.go:136] 0 nodes found, recommending kindnet
	I1002 19:49:58.271038  409972 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 19:49:58.271044  409972 start_flags.go:321] config:
	{Name:multinode-058614 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-058614 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 19:49:58.271156  409972 iso.go:125] acquiring lock: {Name:mkbfe48e1980de2c6c14998e378eaaa3f660e151 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 19:49:58.272844  409972 out.go:177] * Starting control plane node multinode-058614 in cluster multinode-058614
	I1002 19:49:58.274095  409972 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 19:49:58.274131  409972 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17323-390762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1002 19:49:58.274156  409972 cache.go:57] Caching tarball of preloaded images
	I1002 19:49:58.274242  409972 preload.go:174] Found /home/jenkins/minikube-integration/17323-390762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1002 19:49:58.274270  409972 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 19:49:58.275212  409972 profile.go:148] Saving config to /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/config.json ...
	I1002 19:49:58.275249  409972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/config.json: {Name:mk8b28fca0b2029c1999031d4c2d768b8a53ca8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:49:58.275409  409972 start.go:365] acquiring machines lock for multinode-058614: {Name:mk4eec10b828b68be104dfa4b7220ed2aea8b62b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 19:49:58.275461  409972 start.go:369] acquired machines lock for "multinode-058614" in 31.727µs
	I1002 19:49:58.275481  409972 start.go:93] Provisioning new machine with config: &{Name:multinode-058614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-058614 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 19:49:58.275696  409972 start.go:125] createHost starting for "" (driver="kvm2")
	I1002 19:49:58.277385  409972 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1002 19:49:58.277534  409972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:49:58.277573  409972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:49:58.291538  409972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41197
	I1002 19:49:58.291959  409972 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:49:58.292474  409972 main.go:141] libmachine: Using API Version  1
	I1002 19:49:58.292499  409972 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:49:58.292867  409972 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:49:58.293088  409972 main.go:141] libmachine: (multinode-058614) Calling .GetMachineName
	I1002 19:49:58.293261  409972 main.go:141] libmachine: (multinode-058614) Calling .DriverName
	I1002 19:49:58.293457  409972 start.go:159] libmachine.API.Create for "multinode-058614" (driver="kvm2")
	I1002 19:49:58.293492  409972 client.go:168] LocalClient.Create starting
	I1002 19:49:58.293519  409972 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca.pem
	I1002 19:49:58.293548  409972 main.go:141] libmachine: Decoding PEM data...
	I1002 19:49:58.293563  409972 main.go:141] libmachine: Parsing certificate...
	I1002 19:49:58.293614  409972 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17323-390762/.minikube/certs/cert.pem
	I1002 19:49:58.293633  409972 main.go:141] libmachine: Decoding PEM data...
	I1002 19:49:58.293644  409972 main.go:141] libmachine: Parsing certificate...
	I1002 19:49:58.293661  409972 main.go:141] libmachine: Running pre-create checks...
	I1002 19:49:58.293676  409972 main.go:141] libmachine: (multinode-058614) Calling .PreCreateCheck
	I1002 19:49:58.294003  409972 main.go:141] libmachine: (multinode-058614) Calling .GetConfigRaw
	I1002 19:49:58.294387  409972 main.go:141] libmachine: Creating machine...
	I1002 19:49:58.294401  409972 main.go:141] libmachine: (multinode-058614) Calling .Create
	I1002 19:49:58.294534  409972 main.go:141] libmachine: (multinode-058614) Creating KVM machine...
	I1002 19:49:58.295710  409972 main.go:141] libmachine: (multinode-058614) DBG | found existing default KVM network
	I1002 19:49:58.296423  409972 main.go:141] libmachine: (multinode-058614) DBG | I1002 19:49:58.296250  409996 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001478f0}
	I1002 19:49:58.301094  409972 main.go:141] libmachine: (multinode-058614) DBG | trying to create private KVM network mk-multinode-058614 192.168.39.0/24...
	I1002 19:49:58.374308  409972 main.go:141] libmachine: (multinode-058614) DBG | private KVM network mk-multinode-058614 192.168.39.0/24 created
	I1002 19:49:58.374346  409972 main.go:141] libmachine: (multinode-058614) Setting up store path in /home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614 ...
	I1002 19:49:58.374373  409972 main.go:141] libmachine: (multinode-058614) Building disk image from file:///home/jenkins/minikube-integration/17323-390762/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1002 19:49:58.374392  409972 main.go:141] libmachine: (multinode-058614) DBG | I1002 19:49:58.374314  409996 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17323-390762/.minikube
	I1002 19:49:58.374526  409972 main.go:141] libmachine: (multinode-058614) Downloading /home/jenkins/minikube-integration/17323-390762/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17323-390762/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I1002 19:49:58.598512  409972 main.go:141] libmachine: (multinode-058614) DBG | I1002 19:49:58.598338  409996 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614/id_rsa...
	I1002 19:49:58.657349  409972 main.go:141] libmachine: (multinode-058614) DBG | I1002 19:49:58.657186  409996 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614/multinode-058614.rawdisk...
	I1002 19:49:58.657382  409972 main.go:141] libmachine: (multinode-058614) DBG | Writing magic tar header
	I1002 19:49:58.657395  409972 main.go:141] libmachine: (multinode-058614) DBG | Writing SSH key tar header
	I1002 19:49:58.657403  409972 main.go:141] libmachine: (multinode-058614) DBG | I1002 19:49:58.657312  409996 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614 ...
	I1002 19:49:58.657418  409972 main.go:141] libmachine: (multinode-058614) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614
	I1002 19:49:58.657429  409972 main.go:141] libmachine: (multinode-058614) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17323-390762/.minikube/machines
	I1002 19:49:58.657482  409972 main.go:141] libmachine: (multinode-058614) Setting executable bit set on /home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614 (perms=drwx------)
	I1002 19:49:58.657517  409972 main.go:141] libmachine: (multinode-058614) Setting executable bit set on /home/jenkins/minikube-integration/17323-390762/.minikube/machines (perms=drwxr-xr-x)
	I1002 19:49:58.657533  409972 main.go:141] libmachine: (multinode-058614) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17323-390762/.minikube
	I1002 19:49:58.657552  409972 main.go:141] libmachine: (multinode-058614) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17323-390762
	I1002 19:49:58.657561  409972 main.go:141] libmachine: (multinode-058614) Setting executable bit set on /home/jenkins/minikube-integration/17323-390762/.minikube (perms=drwxr-xr-x)
	I1002 19:49:58.657568  409972 main.go:141] libmachine: (multinode-058614) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1002 19:49:58.657578  409972 main.go:141] libmachine: (multinode-058614) DBG | Checking permissions on dir: /home/jenkins
	I1002 19:49:58.657584  409972 main.go:141] libmachine: (multinode-058614) DBG | Checking permissions on dir: /home
	I1002 19:49:58.657597  409972 main.go:141] libmachine: (multinode-058614) Setting executable bit set on /home/jenkins/minikube-integration/17323-390762 (perms=drwxrwxr-x)
	I1002 19:49:58.657611  409972 main.go:141] libmachine: (multinode-058614) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1002 19:49:58.657624  409972 main.go:141] libmachine: (multinode-058614) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1002 19:49:58.657638  409972 main.go:141] libmachine: (multinode-058614) Creating domain...
	I1002 19:49:58.657654  409972 main.go:141] libmachine: (multinode-058614) DBG | Skipping /home - not owner
	I1002 19:49:58.658787  409972 main.go:141] libmachine: (multinode-058614) define libvirt domain using xml: 
	I1002 19:49:58.658817  409972 main.go:141] libmachine: (multinode-058614) <domain type='kvm'>
	I1002 19:49:58.658830  409972 main.go:141] libmachine: (multinode-058614)   <name>multinode-058614</name>
	I1002 19:49:58.658859  409972 main.go:141] libmachine: (multinode-058614)   <memory unit='MiB'>2200</memory>
	I1002 19:49:58.658874  409972 main.go:141] libmachine: (multinode-058614)   <vcpu>2</vcpu>
	I1002 19:49:58.658887  409972 main.go:141] libmachine: (multinode-058614)   <features>
	I1002 19:49:58.658903  409972 main.go:141] libmachine: (multinode-058614)     <acpi/>
	I1002 19:49:58.658917  409972 main.go:141] libmachine: (multinode-058614)     <apic/>
	I1002 19:49:58.658930  409972 main.go:141] libmachine: (multinode-058614)     <pae/>
	I1002 19:49:58.658944  409972 main.go:141] libmachine: (multinode-058614)     
	I1002 19:49:58.658958  409972 main.go:141] libmachine: (multinode-058614)   </features>
	I1002 19:49:58.658971  409972 main.go:141] libmachine: (multinode-058614)   <cpu mode='host-passthrough'>
	I1002 19:49:58.659004  409972 main.go:141] libmachine: (multinode-058614)   
	I1002 19:49:58.659019  409972 main.go:141] libmachine: (multinode-058614)   </cpu>
	I1002 19:49:58.659026  409972 main.go:141] libmachine: (multinode-058614)   <os>
	I1002 19:49:58.659035  409972 main.go:141] libmachine: (multinode-058614)     <type>hvm</type>
	I1002 19:49:58.659072  409972 main.go:141] libmachine: (multinode-058614)     <boot dev='cdrom'/>
	I1002 19:49:58.659095  409972 main.go:141] libmachine: (multinode-058614)     <boot dev='hd'/>
	I1002 19:49:58.659111  409972 main.go:141] libmachine: (multinode-058614)     <bootmenu enable='no'/>
	I1002 19:49:58.659127  409972 main.go:141] libmachine: (multinode-058614)   </os>
	I1002 19:49:58.659142  409972 main.go:141] libmachine: (multinode-058614)   <devices>
	I1002 19:49:58.659153  409972 main.go:141] libmachine: (multinode-058614)     <disk type='file' device='cdrom'>
	I1002 19:49:58.659177  409972 main.go:141] libmachine: (multinode-058614)       <source file='/home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614/boot2docker.iso'/>
	I1002 19:49:58.659195  409972 main.go:141] libmachine: (multinode-058614)       <target dev='hdc' bus='scsi'/>
	I1002 19:49:58.659206  409972 main.go:141] libmachine: (multinode-058614)       <readonly/>
	I1002 19:49:58.659216  409972 main.go:141] libmachine: (multinode-058614)     </disk>
	I1002 19:49:58.659228  409972 main.go:141] libmachine: (multinode-058614)     <disk type='file' device='disk'>
	I1002 19:49:58.659241  409972 main.go:141] libmachine: (multinode-058614)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1002 19:49:58.659284  409972 main.go:141] libmachine: (multinode-058614)       <source file='/home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614/multinode-058614.rawdisk'/>
	I1002 19:49:58.659311  409972 main.go:141] libmachine: (multinode-058614)       <target dev='hda' bus='virtio'/>
	I1002 19:49:58.659325  409972 main.go:141] libmachine: (multinode-058614)     </disk>
	I1002 19:49:58.659335  409972 main.go:141] libmachine: (multinode-058614)     <interface type='network'>
	I1002 19:49:58.659349  409972 main.go:141] libmachine: (multinode-058614)       <source network='mk-multinode-058614'/>
	I1002 19:49:58.659362  409972 main.go:141] libmachine: (multinode-058614)       <model type='virtio'/>
	I1002 19:49:58.659374  409972 main.go:141] libmachine: (multinode-058614)     </interface>
	I1002 19:49:58.659388  409972 main.go:141] libmachine: (multinode-058614)     <interface type='network'>
	I1002 19:49:58.659403  409972 main.go:141] libmachine: (multinode-058614)       <source network='default'/>
	I1002 19:49:58.659425  409972 main.go:141] libmachine: (multinode-058614)       <model type='virtio'/>
	I1002 19:49:58.659454  409972 main.go:141] libmachine: (multinode-058614)     </interface>
	I1002 19:49:58.659472  409972 main.go:141] libmachine: (multinode-058614)     <serial type='pty'>
	I1002 19:49:58.659486  409972 main.go:141] libmachine: (multinode-058614)       <target port='0'/>
	I1002 19:49:58.659497  409972 main.go:141] libmachine: (multinode-058614)     </serial>
	I1002 19:49:58.659510  409972 main.go:141] libmachine: (multinode-058614)     <console type='pty'>
	I1002 19:49:58.659524  409972 main.go:141] libmachine: (multinode-058614)       <target type='serial' port='0'/>
	I1002 19:49:58.659538  409972 main.go:141] libmachine: (multinode-058614)     </console>
	I1002 19:49:58.659552  409972 main.go:141] libmachine: (multinode-058614)     <rng model='virtio'>
	I1002 19:49:58.659568  409972 main.go:141] libmachine: (multinode-058614)       <backend model='random'>/dev/random</backend>
	I1002 19:49:58.659580  409972 main.go:141] libmachine: (multinode-058614)     </rng>
	I1002 19:49:58.659592  409972 main.go:141] libmachine: (multinode-058614)     
	I1002 19:49:58.659610  409972 main.go:141] libmachine: (multinode-058614)     
	I1002 19:49:58.659645  409972 main.go:141] libmachine: (multinode-058614)   </devices>
	I1002 19:49:58.659672  409972 main.go:141] libmachine: (multinode-058614) </domain>
	I1002 19:49:58.659688  409972 main.go:141] libmachine: (multinode-058614) 
	I1002 19:49:58.663351  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:76:77:b5 in network default
	I1002 19:49:58.663912  409972 main.go:141] libmachine: (multinode-058614) Ensuring networks are active...
	I1002 19:49:58.663935  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:49:58.664599  409972 main.go:141] libmachine: (multinode-058614) Ensuring network default is active
	I1002 19:49:58.664907  409972 main.go:141] libmachine: (multinode-058614) Ensuring network mk-multinode-058614 is active
	I1002 19:49:58.665352  409972 main.go:141] libmachine: (multinode-058614) Getting domain xml...
	I1002 19:49:58.665963  409972 main.go:141] libmachine: (multinode-058614) Creating domain...
	I1002 19:49:59.876688  409972 main.go:141] libmachine: (multinode-058614) Waiting to get IP...
	I1002 19:49:59.877577  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:49:59.877983  409972 main.go:141] libmachine: (multinode-058614) DBG | unable to find current IP address of domain multinode-058614 in network mk-multinode-058614
	I1002 19:49:59.878016  409972 main.go:141] libmachine: (multinode-058614) DBG | I1002 19:49:59.877955  409996 retry.go:31] will retry after 255.671907ms: waiting for machine to come up
	I1002 19:50:00.135723  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:00.136335  409972 main.go:141] libmachine: (multinode-058614) DBG | unable to find current IP address of domain multinode-058614 in network mk-multinode-058614
	I1002 19:50:00.136370  409972 main.go:141] libmachine: (multinode-058614) DBG | I1002 19:50:00.136246  409996 retry.go:31] will retry after 352.035915ms: waiting for machine to come up
	I1002 19:50:00.489779  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:00.490239  409972 main.go:141] libmachine: (multinode-058614) DBG | unable to find current IP address of domain multinode-058614 in network mk-multinode-058614
	I1002 19:50:00.490266  409972 main.go:141] libmachine: (multinode-058614) DBG | I1002 19:50:00.490191  409996 retry.go:31] will retry after 354.612504ms: waiting for machine to come up
	I1002 19:50:00.846815  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:00.847296  409972 main.go:141] libmachine: (multinode-058614) DBG | unable to find current IP address of domain multinode-058614 in network mk-multinode-058614
	I1002 19:50:00.847319  409972 main.go:141] libmachine: (multinode-058614) DBG | I1002 19:50:00.847238  409996 retry.go:31] will retry after 514.29209ms: waiting for machine to come up
	I1002 19:50:01.363145  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:01.363711  409972 main.go:141] libmachine: (multinode-058614) DBG | unable to find current IP address of domain multinode-058614 in network mk-multinode-058614
	I1002 19:50:01.363750  409972 main.go:141] libmachine: (multinode-058614) DBG | I1002 19:50:01.363638  409996 retry.go:31] will retry after 657.973733ms: waiting for machine to come up
	I1002 19:50:02.023540  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:02.024075  409972 main.go:141] libmachine: (multinode-058614) DBG | unable to find current IP address of domain multinode-058614 in network mk-multinode-058614
	I1002 19:50:02.024155  409972 main.go:141] libmachine: (multinode-058614) DBG | I1002 19:50:02.024065  409996 retry.go:31] will retry after 825.216008ms: waiting for machine to come up
	I1002 19:50:02.850919  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:02.851399  409972 main.go:141] libmachine: (multinode-058614) DBG | unable to find current IP address of domain multinode-058614 in network mk-multinode-058614
	I1002 19:50:02.851431  409972 main.go:141] libmachine: (multinode-058614) DBG | I1002 19:50:02.851348  409996 retry.go:31] will retry after 1.148799489s: waiting for machine to come up
	I1002 19:50:04.001458  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:04.002042  409972 main.go:141] libmachine: (multinode-058614) DBG | unable to find current IP address of domain multinode-058614 in network mk-multinode-058614
	I1002 19:50:04.002079  409972 main.go:141] libmachine: (multinode-058614) DBG | I1002 19:50:04.001990  409996 retry.go:31] will retry after 1.073670146s: waiting for machine to come up
	I1002 19:50:05.077700  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:05.078134  409972 main.go:141] libmachine: (multinode-058614) DBG | unable to find current IP address of domain multinode-058614 in network mk-multinode-058614
	I1002 19:50:05.078162  409972 main.go:141] libmachine: (multinode-058614) DBG | I1002 19:50:05.078068  409996 retry.go:31] will retry after 1.171770049s: waiting for machine to come up
	I1002 19:50:06.251085  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:06.251586  409972 main.go:141] libmachine: (multinode-058614) DBG | unable to find current IP address of domain multinode-058614 in network mk-multinode-058614
	I1002 19:50:06.251624  409972 main.go:141] libmachine: (multinode-058614) DBG | I1002 19:50:06.251528  409996 retry.go:31] will retry after 1.482092892s: waiting for machine to come up
	I1002 19:50:07.736510  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:07.736958  409972 main.go:141] libmachine: (multinode-058614) DBG | unable to find current IP address of domain multinode-058614 in network mk-multinode-058614
	I1002 19:50:07.736994  409972 main.go:141] libmachine: (multinode-058614) DBG | I1002 19:50:07.736907  409996 retry.go:31] will retry after 1.85284522s: waiting for machine to come up
	I1002 19:50:09.592226  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:09.592906  409972 main.go:141] libmachine: (multinode-058614) DBG | unable to find current IP address of domain multinode-058614 in network mk-multinode-058614
	I1002 19:50:09.592942  409972 main.go:141] libmachine: (multinode-058614) DBG | I1002 19:50:09.592797  409996 retry.go:31] will retry after 2.585646505s: waiting for machine to come up
	I1002 19:50:12.181600  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:12.182124  409972 main.go:141] libmachine: (multinode-058614) DBG | unable to find current IP address of domain multinode-058614 in network mk-multinode-058614
	I1002 19:50:12.182147  409972 main.go:141] libmachine: (multinode-058614) DBG | I1002 19:50:12.182083  409996 retry.go:31] will retry after 4.171722964s: waiting for machine to come up
	I1002 19:50:16.358079  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:16.358471  409972 main.go:141] libmachine: (multinode-058614) DBG | unable to find current IP address of domain multinode-058614 in network mk-multinode-058614
	I1002 19:50:16.358495  409972 main.go:141] libmachine: (multinode-058614) DBG | I1002 19:50:16.358418  409996 retry.go:31] will retry after 3.613490172s: waiting for machine to come up
	I1002 19:50:19.974039  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:19.974441  409972 main.go:141] libmachine: (multinode-058614) Found IP for machine: 192.168.39.83
	I1002 19:50:19.974477  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has current primary IP address 192.168.39.83 and MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:19.974490  409972 main.go:141] libmachine: (multinode-058614) Reserving static IP address...
	I1002 19:50:19.974961  409972 main.go:141] libmachine: (multinode-058614) DBG | unable to find host DHCP lease matching {name: "multinode-058614", mac: "52:54:00:c7:90:6b", ip: "192.168.39.83"} in network mk-multinode-058614
	I1002 19:50:20.049561  409972 main.go:141] libmachine: (multinode-058614) DBG | Getting to WaitForSSH function...
	I1002 19:50:20.049598  409972 main.go:141] libmachine: (multinode-058614) Reserved static IP address: 192.168.39.83
	I1002 19:50:20.049628  409972 main.go:141] libmachine: (multinode-058614) Waiting for SSH to be available...
	I1002 19:50:20.052042  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:20.052385  409972 main.go:141] libmachine: (multinode-058614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:90:6b", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:50:13 +0000 UTC Type:0 Mac:52:54:00:c7:90:6b Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c7:90:6b}
	I1002 19:50:20.052421  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined IP address 192.168.39.83 and MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:20.052582  409972 main.go:141] libmachine: (multinode-058614) DBG | Using SSH client type: external
	I1002 19:50:20.052607  409972 main.go:141] libmachine: (multinode-058614) DBG | Using SSH private key: /home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614/id_rsa (-rw-------)
	I1002 19:50:20.052645  409972 main.go:141] libmachine: (multinode-058614) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.83 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 19:50:20.052676  409972 main.go:141] libmachine: (multinode-058614) DBG | About to run SSH command:
	I1002 19:50:20.052717  409972 main.go:141] libmachine: (multinode-058614) DBG | exit 0
	I1002 19:50:20.143149  409972 main.go:141] libmachine: (multinode-058614) DBG | SSH cmd err, output: <nil>: 
	I1002 19:50:20.143469  409972 main.go:141] libmachine: (multinode-058614) KVM machine creation complete!
	I1002 19:50:20.143779  409972 main.go:141] libmachine: (multinode-058614) Calling .GetConfigRaw
	I1002 19:50:20.144336  409972 main.go:141] libmachine: (multinode-058614) Calling .DriverName
	I1002 19:50:20.144563  409972 main.go:141] libmachine: (multinode-058614) Calling .DriverName
	I1002 19:50:20.144740  409972 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1002 19:50:20.144754  409972 main.go:141] libmachine: (multinode-058614) Calling .GetState
	I1002 19:50:20.146298  409972 main.go:141] libmachine: Detecting operating system of created instance...
	I1002 19:50:20.146314  409972 main.go:141] libmachine: Waiting for SSH to be available...
	I1002 19:50:20.146320  409972 main.go:141] libmachine: Getting to WaitForSSH function...
	I1002 19:50:20.146327  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHHostname
	I1002 19:50:20.148582  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:20.148942  409972 main.go:141] libmachine: (multinode-058614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:90:6b", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:50:13 +0000 UTC Type:0 Mac:52:54:00:c7:90:6b Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-058614 Clientid:01:52:54:00:c7:90:6b}
	I1002 19:50:20.148980  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined IP address 192.168.39.83 and MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:20.149052  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHPort
	I1002 19:50:20.149272  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHKeyPath
	I1002 19:50:20.149417  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHKeyPath
	I1002 19:50:20.149554  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHUsername
	I1002 19:50:20.149700  409972 main.go:141] libmachine: Using SSH client type: native
	I1002 19:50:20.150107  409972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1002 19:50:20.150122  409972 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1002 19:50:20.270792  409972 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 19:50:20.270814  409972 main.go:141] libmachine: Detecting the provisioner...
	I1002 19:50:20.270822  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHHostname
	I1002 19:50:20.273880  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:20.274324  409972 main.go:141] libmachine: (multinode-058614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:90:6b", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:50:13 +0000 UTC Type:0 Mac:52:54:00:c7:90:6b Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-058614 Clientid:01:52:54:00:c7:90:6b}
	I1002 19:50:20.274371  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined IP address 192.168.39.83 and MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:20.274428  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHPort
	I1002 19:50:20.274658  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHKeyPath
	I1002 19:50:20.274859  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHKeyPath
	I1002 19:50:20.274984  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHUsername
	I1002 19:50:20.275121  409972 main.go:141] libmachine: Using SSH client type: native
	I1002 19:50:20.275493  409972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1002 19:50:20.275511  409972 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1002 19:50:20.396137  409972 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1002 19:50:20.396232  409972 main.go:141] libmachine: found compatible host: buildroot
	I1002 19:50:20.396248  409972 main.go:141] libmachine: Provisioning with buildroot...
	I1002 19:50:20.396261  409972 main.go:141] libmachine: (multinode-058614) Calling .GetMachineName
	I1002 19:50:20.396508  409972 buildroot.go:166] provisioning hostname "multinode-058614"
	I1002 19:50:20.396544  409972 main.go:141] libmachine: (multinode-058614) Calling .GetMachineName
	I1002 19:50:20.396830  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHHostname
	I1002 19:50:20.399277  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:20.399664  409972 main.go:141] libmachine: (multinode-058614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:90:6b", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:50:13 +0000 UTC Type:0 Mac:52:54:00:c7:90:6b Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-058614 Clientid:01:52:54:00:c7:90:6b}
	I1002 19:50:20.399699  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined IP address 192.168.39.83 and MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:20.399808  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHPort
	I1002 19:50:20.400023  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHKeyPath
	I1002 19:50:20.400233  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHKeyPath
	I1002 19:50:20.400402  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHUsername
	I1002 19:50:20.400572  409972 main.go:141] libmachine: Using SSH client type: native
	I1002 19:50:20.400893  409972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1002 19:50:20.400907  409972 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-058614 && echo "multinode-058614" | sudo tee /etc/hostname
	I1002 19:50:20.535872  409972 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-058614
	
	I1002 19:50:20.535908  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHHostname
	I1002 19:50:20.538634  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:20.538946  409972 main.go:141] libmachine: (multinode-058614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:90:6b", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:50:13 +0000 UTC Type:0 Mac:52:54:00:c7:90:6b Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-058614 Clientid:01:52:54:00:c7:90:6b}
	I1002 19:50:20.538988  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined IP address 192.168.39.83 and MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:20.539123  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHPort
	I1002 19:50:20.539367  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHKeyPath
	I1002 19:50:20.539575  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHKeyPath
	I1002 19:50:20.539734  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHUsername
	I1002 19:50:20.539894  409972 main.go:141] libmachine: Using SSH client type: native
	I1002 19:50:20.540199  409972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1002 19:50:20.540222  409972 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-058614' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-058614/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-058614' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 19:50:20.666953  409972 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 19:50:20.666992  409972 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17323-390762/.minikube CaCertPath:/home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17323-390762/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17323-390762/.minikube}
	I1002 19:50:20.667034  409972 buildroot.go:174] setting up certificates
	I1002 19:50:20.667048  409972 provision.go:83] configureAuth start
	I1002 19:50:20.667061  409972 main.go:141] libmachine: (multinode-058614) Calling .GetMachineName
	I1002 19:50:20.667372  409972 main.go:141] libmachine: (multinode-058614) Calling .GetIP
	I1002 19:50:20.670137  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:20.670603  409972 main.go:141] libmachine: (multinode-058614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:90:6b", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:50:13 +0000 UTC Type:0 Mac:52:54:00:c7:90:6b Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-058614 Clientid:01:52:54:00:c7:90:6b}
	I1002 19:50:20.670639  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined IP address 192.168.39.83 and MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:20.670780  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHHostname
	I1002 19:50:20.673699  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:20.674075  409972 main.go:141] libmachine: (multinode-058614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:90:6b", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:50:13 +0000 UTC Type:0 Mac:52:54:00:c7:90:6b Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-058614 Clientid:01:52:54:00:c7:90:6b}
	I1002 19:50:20.674109  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined IP address 192.168.39.83 and MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:20.674281  409972 provision.go:138] copyHostCerts
	I1002 19:50:20.674313  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17323-390762/.minikube/ca.pem
	I1002 19:50:20.674352  409972 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-390762/.minikube/ca.pem, removing ...
	I1002 19:50:20.674363  409972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-390762/.minikube/ca.pem
	I1002 19:50:20.674421  409972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17323-390762/.minikube/ca.pem (1078 bytes)
	I1002 19:50:20.674511  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17323-390762/.minikube/cert.pem
	I1002 19:50:20.674532  409972 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-390762/.minikube/cert.pem, removing ...
	I1002 19:50:20.674539  409972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-390762/.minikube/cert.pem
	I1002 19:50:20.674558  409972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17323-390762/.minikube/cert.pem (1123 bytes)
	I1002 19:50:20.674609  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17323-390762/.minikube/key.pem
	I1002 19:50:20.674629  409972 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-390762/.minikube/key.pem, removing ...
	I1002 19:50:20.674645  409972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-390762/.minikube/key.pem
	I1002 19:50:20.674670  409972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17323-390762/.minikube/key.pem (1675 bytes)
	I1002 19:50:20.674733  409972 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17323-390762/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca-key.pem org=jenkins.multinode-058614 san=[192.168.39.83 192.168.39.83 localhost 127.0.0.1 minikube multinode-058614]
	I1002 19:50:20.782693  409972 provision.go:172] copyRemoteCerts
	I1002 19:50:20.782763  409972 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 19:50:20.782791  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHHostname
	I1002 19:50:20.785638  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:20.785953  409972 main.go:141] libmachine: (multinode-058614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:90:6b", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:50:13 +0000 UTC Type:0 Mac:52:54:00:c7:90:6b Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-058614 Clientid:01:52:54:00:c7:90:6b}
	I1002 19:50:20.785991  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined IP address 192.168.39.83 and MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:20.786149  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHPort
	I1002 19:50:20.786360  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHKeyPath
	I1002 19:50:20.786559  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHUsername
	I1002 19:50:20.786766  409972 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614/id_rsa Username:docker}
	I1002 19:50:20.877543  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 19:50:20.877621  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1002 19:50:20.899625  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 19:50:20.899696  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 19:50:20.920572  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 19:50:20.920633  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 19:50:20.941546  409972 provision.go:86] duration metric: configureAuth took 274.481852ms
	I1002 19:50:20.941570  409972 buildroot.go:189] setting minikube options for container-runtime
	I1002 19:50:20.941777  409972 config.go:182] Loaded profile config "multinode-058614": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 19:50:20.941805  409972 main.go:141] libmachine: (multinode-058614) Calling .DriverName
	I1002 19:50:20.942115  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHHostname
	I1002 19:50:20.944684  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:20.945004  409972 main.go:141] libmachine: (multinode-058614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:90:6b", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:50:13 +0000 UTC Type:0 Mac:52:54:00:c7:90:6b Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-058614 Clientid:01:52:54:00:c7:90:6b}
	I1002 19:50:20.945031  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined IP address 192.168.39.83 and MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:20.945178  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHPort
	I1002 19:50:20.945364  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHKeyPath
	I1002 19:50:20.945518  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHKeyPath
	I1002 19:50:20.945678  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHUsername
	I1002 19:50:20.945851  409972 main.go:141] libmachine: Using SSH client type: native
	I1002 19:50:20.946170  409972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1002 19:50:20.946183  409972 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 19:50:21.069219  409972 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1002 19:50:21.069249  409972 buildroot.go:70] root file system type: tmpfs
	I1002 19:50:21.069420  409972 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 19:50:21.069450  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHHostname
	I1002 19:50:21.072452  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:21.072831  409972 main.go:141] libmachine: (multinode-058614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:90:6b", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:50:13 +0000 UTC Type:0 Mac:52:54:00:c7:90:6b Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-058614 Clientid:01:52:54:00:c7:90:6b}
	I1002 19:50:21.072865  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined IP address 192.168.39.83 and MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:21.073060  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHPort
	I1002 19:50:21.073305  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHKeyPath
	I1002 19:50:21.073493  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHKeyPath
	I1002 19:50:21.073691  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHUsername
	I1002 19:50:21.073878  409972 main.go:141] libmachine: Using SSH client type: native
	I1002 19:50:21.074218  409972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1002 19:50:21.074295  409972 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 19:50:21.208467  409972 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1002 19:50:21.208523  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHHostname
	I1002 19:50:21.211611  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:21.211937  409972 main.go:141] libmachine: (multinode-058614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:90:6b", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:50:13 +0000 UTC Type:0 Mac:52:54:00:c7:90:6b Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-058614 Clientid:01:52:54:00:c7:90:6b}
	I1002 19:50:21.211969  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined IP address 192.168.39.83 and MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:21.212188  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHPort
	I1002 19:50:21.212436  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHKeyPath
	I1002 19:50:21.212610  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHKeyPath
	I1002 19:50:21.212760  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHUsername
	I1002 19:50:21.212994  409972 main.go:141] libmachine: Using SSH client type: native
	I1002 19:50:21.213320  409972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1002 19:50:21.213347  409972 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1002 19:50:22.003288  409972 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1002 19:50:22.003320  409972 main.go:141] libmachine: Checking connection to Docker...
	I1002 19:50:22.003330  409972 main.go:141] libmachine: (multinode-058614) Calling .GetURL
	I1002 19:50:22.004695  409972 main.go:141] libmachine: (multinode-058614) DBG | Using libvirt version 6000000
	I1002 19:50:22.007166  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:22.007534  409972 main.go:141] libmachine: (multinode-058614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:90:6b", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:50:13 +0000 UTC Type:0 Mac:52:54:00:c7:90:6b Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-058614 Clientid:01:52:54:00:c7:90:6b}
	I1002 19:50:22.007571  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined IP address 192.168.39.83 and MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:22.007745  409972 main.go:141] libmachine: Docker is up and running!
	I1002 19:50:22.007761  409972 main.go:141] libmachine: Reticulating splines...
	I1002 19:50:22.007769  409972 client.go:171] LocalClient.Create took 23.714267173s
	I1002 19:50:22.007801  409972 start.go:167] duration metric: libmachine.API.Create for "multinode-058614" took 23.714340895s
	I1002 19:50:22.007814  409972 start.go:300] post-start starting for "multinode-058614" (driver="kvm2")
	I1002 19:50:22.007829  409972 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 19:50:22.007850  409972 main.go:141] libmachine: (multinode-058614) Calling .DriverName
	I1002 19:50:22.008227  409972 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 19:50:22.008271  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHHostname
	I1002 19:50:22.010493  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:22.010837  409972 main.go:141] libmachine: (multinode-058614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:90:6b", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:50:13 +0000 UTC Type:0 Mac:52:54:00:c7:90:6b Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-058614 Clientid:01:52:54:00:c7:90:6b}
	I1002 19:50:22.010869  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined IP address 192.168.39.83 and MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:22.010968  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHPort
	I1002 19:50:22.011159  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHKeyPath
	I1002 19:50:22.011346  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHUsername
	I1002 19:50:22.011527  409972 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614/id_rsa Username:docker}
	I1002 19:50:22.100392  409972 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 19:50:22.104220  409972 command_runner.go:130] > NAME=Buildroot
	I1002 19:50:22.104247  409972 command_runner.go:130] > VERSION=2021.02.12-1-gb090841-dirty
	I1002 19:50:22.104254  409972 command_runner.go:130] > ID=buildroot
	I1002 19:50:22.104262  409972 command_runner.go:130] > VERSION_ID=2021.02.12
	I1002 19:50:22.104270  409972 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1002 19:50:22.104331  409972 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 19:50:22.104357  409972 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-390762/.minikube/addons for local assets ...
	I1002 19:50:22.104419  409972 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-390762/.minikube/files for local assets ...
	I1002 19:50:22.104530  409972 filesync.go:149] local asset: /home/jenkins/minikube-integration/17323-390762/.minikube/files/etc/ssl/certs/3979952.pem -> 3979952.pem in /etc/ssl/certs
	I1002 19:50:22.104548  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/files/etc/ssl/certs/3979952.pem -> /etc/ssl/certs/3979952.pem
	I1002 19:50:22.104665  409972 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 19:50:22.112611  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/files/etc/ssl/certs/3979952.pem --> /etc/ssl/certs/3979952.pem (1708 bytes)
	I1002 19:50:22.135640  409972 start.go:303] post-start completed in 127.80622ms
	I1002 19:50:22.135703  409972 main.go:141] libmachine: (multinode-058614) Calling .GetConfigRaw
	I1002 19:50:22.136344  409972 main.go:141] libmachine: (multinode-058614) Calling .GetIP
	I1002 19:50:22.140370  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:22.140834  409972 main.go:141] libmachine: (multinode-058614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:90:6b", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:50:13 +0000 UTC Type:0 Mac:52:54:00:c7:90:6b Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-058614 Clientid:01:52:54:00:c7:90:6b}
	I1002 19:50:22.140868  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined IP address 192.168.39.83 and MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:22.141079  409972 profile.go:148] Saving config to /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/config.json ...
	I1002 19:50:22.141255  409972 start.go:128] duration metric: createHost completed in 23.865401826s
	I1002 19:50:22.141278  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHHostname
	I1002 19:50:22.143429  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:22.143750  409972 main.go:141] libmachine: (multinode-058614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:90:6b", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:50:13 +0000 UTC Type:0 Mac:52:54:00:c7:90:6b Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-058614 Clientid:01:52:54:00:c7:90:6b}
	I1002 19:50:22.143776  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined IP address 192.168.39.83 and MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:22.143894  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHPort
	I1002 19:50:22.144069  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHKeyPath
	I1002 19:50:22.144205  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHKeyPath
	I1002 19:50:22.144364  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHUsername
	I1002 19:50:22.144514  409972 main.go:141] libmachine: Using SSH client type: native
	I1002 19:50:22.144842  409972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1002 19:50:22.144854  409972 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 19:50:22.268035  409972 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696276222.250135687
	
	I1002 19:50:22.268066  409972 fix.go:206] guest clock: 1696276222.250135687
	I1002 19:50:22.268076  409972 fix.go:219] Guest: 2023-10-02 19:50:22.250135687 +0000 UTC Remote: 2023-10-02 19:50:22.14126695 +0000 UTC m=+23.968959937 (delta=108.868737ms)
	I1002 19:50:22.268102  409972 fix.go:190] guest clock delta is within tolerance: 108.868737ms
	I1002 19:50:22.268114  409972 start.go:83] releasing machines lock for "multinode-058614", held for 23.992639554s
	I1002 19:50:22.268140  409972 main.go:141] libmachine: (multinode-058614) Calling .DriverName
	I1002 19:50:22.268477  409972 main.go:141] libmachine: (multinode-058614) Calling .GetIP
	I1002 19:50:22.271312  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:22.271724  409972 main.go:141] libmachine: (multinode-058614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:90:6b", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:50:13 +0000 UTC Type:0 Mac:52:54:00:c7:90:6b Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-058614 Clientid:01:52:54:00:c7:90:6b}
	I1002 19:50:22.271753  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined IP address 192.168.39.83 and MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:22.271907  409972 main.go:141] libmachine: (multinode-058614) Calling .DriverName
	I1002 19:50:22.272543  409972 main.go:141] libmachine: (multinode-058614) Calling .DriverName
	I1002 19:50:22.272741  409972 main.go:141] libmachine: (multinode-058614) Calling .DriverName
	I1002 19:50:22.272818  409972 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 19:50:22.272872  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHHostname
	I1002 19:50:22.273052  409972 ssh_runner.go:195] Run: cat /version.json
	I1002 19:50:22.273083  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHHostname
	I1002 19:50:22.275680  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:22.275819  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:22.276111  409972 main.go:141] libmachine: (multinode-058614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:90:6b", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:50:13 +0000 UTC Type:0 Mac:52:54:00:c7:90:6b Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-058614 Clientid:01:52:54:00:c7:90:6b}
	I1002 19:50:22.276151  409972 main.go:141] libmachine: (multinode-058614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:90:6b", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:50:13 +0000 UTC Type:0 Mac:52:54:00:c7:90:6b Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-058614 Clientid:01:52:54:00:c7:90:6b}
	I1002 19:50:22.276179  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined IP address 192.168.39.83 and MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:22.276197  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined IP address 192.168.39.83 and MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:22.276411  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHPort
	I1002 19:50:22.276530  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHPort
	I1002 19:50:22.276637  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHKeyPath
	I1002 19:50:22.276696  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHKeyPath
	I1002 19:50:22.276758  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHUsername
	I1002 19:50:22.276833  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHUsername
	I1002 19:50:22.276900  409972 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614/id_rsa Username:docker}
	I1002 19:50:22.276958  409972 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614/id_rsa Username:docker}
	I1002 19:50:22.385585  409972 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 19:50:22.386536  409972 command_runner.go:130] > {"iso_version": "v1.31.0-1695060926-17240", "kicbase_version": "v0.0.40-1694798187-17250", "minikube_version": "v1.31.2", "commit": "0402681e4770013826956f326b174c70611f3073"}
	I1002 19:50:22.386711  409972 ssh_runner.go:195] Run: systemctl --version
	I1002 19:50:22.391966  409972 command_runner.go:130] > systemd 247 (247)
	I1002 19:50:22.391988  409972 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1002 19:50:22.392419  409972 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 19:50:22.397695  409972 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1002 19:50:22.397732  409972 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 19:50:22.397777  409972 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 19:50:22.412627  409972 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1002 19:50:22.412660  409972 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 19:50:22.412674  409972 start.go:469] detecting cgroup driver to use...
	I1002 19:50:22.412806  409972 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 19:50:22.430915  409972 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1002 19:50:22.431314  409972 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1002 19:50:22.441268  409972 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 19:50:22.451234  409972 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 19:50:22.451324  409972 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 19:50:22.461348  409972 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 19:50:22.471415  409972 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 19:50:22.481836  409972 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 19:50:22.492058  409972 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 19:50:22.502654  409972 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 19:50:22.511808  409972 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 19:50:22.520656  409972 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1002 19:50:22.520766  409972 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 19:50:22.529867  409972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:50:22.631431  409972 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 19:50:22.650698  409972 start.go:469] detecting cgroup driver to use...
	I1002 19:50:22.650789  409972 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 19:50:22.665390  409972 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1002 19:50:22.666204  409972 command_runner.go:130] > [Unit]
	I1002 19:50:22.666221  409972 command_runner.go:130] > Description=Docker Application Container Engine
	I1002 19:50:22.666227  409972 command_runner.go:130] > Documentation=https://docs.docker.com
	I1002 19:50:22.666232  409972 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1002 19:50:22.666238  409972 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1002 19:50:22.666246  409972 command_runner.go:130] > StartLimitBurst=3
	I1002 19:50:22.666252  409972 command_runner.go:130] > StartLimitIntervalSec=60
	I1002 19:50:22.666259  409972 command_runner.go:130] > [Service]
	I1002 19:50:22.666266  409972 command_runner.go:130] > Type=notify
	I1002 19:50:22.666279  409972 command_runner.go:130] > Restart=on-failure
	I1002 19:50:22.666292  409972 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1002 19:50:22.666302  409972 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1002 19:50:22.666311  409972 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1002 19:50:22.666320  409972 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1002 19:50:22.666329  409972 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1002 19:50:22.666337  409972 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1002 19:50:22.666378  409972 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1002 19:50:22.666393  409972 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1002 19:50:22.666405  409972 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1002 19:50:22.666415  409972 command_runner.go:130] > ExecStart=
	I1002 19:50:22.666440  409972 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1002 19:50:22.666451  409972 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1002 19:50:22.666460  409972 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1002 19:50:22.666468  409972 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1002 19:50:22.666475  409972 command_runner.go:130] > LimitNOFILE=infinity
	I1002 19:50:22.666479  409972 command_runner.go:130] > LimitNPROC=infinity
	I1002 19:50:22.666483  409972 command_runner.go:130] > LimitCORE=infinity
	I1002 19:50:22.666500  409972 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1002 19:50:22.666508  409972 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1002 19:50:22.666520  409972 command_runner.go:130] > TasksMax=infinity
	I1002 19:50:22.666528  409972 command_runner.go:130] > TimeoutStartSec=0
	I1002 19:50:22.666539  409972 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1002 19:50:22.666547  409972 command_runner.go:130] > Delegate=yes
	I1002 19:50:22.666556  409972 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1002 19:50:22.666564  409972 command_runner.go:130] > KillMode=process
	I1002 19:50:22.666571  409972 command_runner.go:130] > [Install]
	I1002 19:50:22.666589  409972 command_runner.go:130] > WantedBy=multi-user.target
	I1002 19:50:22.666671  409972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 19:50:22.678344  409972 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 19:50:22.697507  409972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 19:50:22.709557  409972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 19:50:22.721766  409972 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1002 19:50:22.753868  409972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 19:50:22.766757  409972 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 19:50:22.783666  409972 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1002 19:50:22.783751  409972 ssh_runner.go:195] Run: which cri-dockerd
	I1002 19:50:22.787040  409972 command_runner.go:130] > /usr/bin/cri-dockerd
	I1002 19:50:22.787286  409972 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 19:50:22.795142  409972 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1002 19:50:22.810612  409972 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 19:50:22.911628  409972 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 19:50:23.025667  409972 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 19:50:23.025850  409972 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1002 19:50:23.042226  409972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:50:23.143027  409972 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 19:50:24.533455  409972 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.390377888s)
	I1002 19:50:24.533550  409972 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 19:50:24.634393  409972 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1002 19:50:24.745584  409972 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 19:50:24.850096  409972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:50:24.958473  409972 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1002 19:50:24.975487  409972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:50:25.075703  409972 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1002 19:50:25.156707  409972 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1002 19:50:25.156808  409972 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1002 19:50:25.162736  409972 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1002 19:50:25.162768  409972 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 19:50:25.162779  409972 command_runner.go:130] > Device: 16h/22d	Inode: 863         Links: 1
	I1002 19:50:25.162798  409972 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1002 19:50:25.162808  409972 command_runner.go:130] > Access: 2023-10-02 19:50:25.075913226 +0000
	I1002 19:50:25.162816  409972 command_runner.go:130] > Modify: 2023-10-02 19:50:25.075913226 +0000
	I1002 19:50:25.162823  409972 command_runner.go:130] > Change: 2023-10-02 19:50:25.077916143 +0000
	I1002 19:50:25.162834  409972 command_runner.go:130] >  Birth: -
	I1002 19:50:25.162993  409972 start.go:537] Will wait 60s for crictl version
	I1002 19:50:25.163059  409972 ssh_runner.go:195] Run: which crictl
	I1002 19:50:25.170719  409972 command_runner.go:130] > /usr/bin/crictl
	I1002 19:50:25.170984  409972 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 19:50:25.226372  409972 command_runner.go:130] > Version:  0.1.0
	I1002 19:50:25.226399  409972 command_runner.go:130] > RuntimeName:  docker
	I1002 19:50:25.226406  409972 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1002 19:50:25.226414  409972 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 19:50:25.226435  409972 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1002 19:50:25.226505  409972 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 19:50:25.251966  409972 command_runner.go:130] > 24.0.6
	I1002 19:50:25.252403  409972 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 19:50:25.277871  409972 command_runner.go:130] > 24.0.6
	I1002 19:50:25.279756  409972 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I1002 19:50:25.279809  409972 main.go:141] libmachine: (multinode-058614) Calling .GetIP
	I1002 19:50:25.282994  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:25.283432  409972 main.go:141] libmachine: (multinode-058614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:90:6b", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:50:13 +0000 UTC Type:0 Mac:52:54:00:c7:90:6b Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-058614 Clientid:01:52:54:00:c7:90:6b}
	I1002 19:50:25.283484  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined IP address 192.168.39.83 and MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:25.283710  409972 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 19:50:25.287692  409972 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 19:50:25.299098  409972 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 19:50:25.299170  409972 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 19:50:25.316791  409972 docker.go:664] Got preloaded images: 
	I1002 19:50:25.316819  409972 docker.go:670] registry.k8s.io/kube-apiserver:v1.28.2 wasn't preloaded
	I1002 19:50:25.316872  409972 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1002 19:50:25.326203  409972 command_runner.go:139] > {"Repositories":{}}
	I1002 19:50:25.326315  409972 ssh_runner.go:195] Run: which lz4
	I1002 19:50:25.329872  409972 command_runner.go:130] > /usr/bin/lz4
	I1002 19:50:25.329952  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1002 19:50:25.330047  409972 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 19:50:25.333785  409972 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 19:50:25.333915  409972 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 19:50:25.333939  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (422207204 bytes)
	I1002 19:50:26.834180  409972 docker.go:628] Took 1.504156 seconds to copy over tarball
	I1002 19:50:26.834276  409972 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 19:50:29.389192  409972 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.554876475s)
	I1002 19:50:29.389237  409972 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 19:50:29.427482  409972 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1002 19:50:29.437302  409972 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.2":"sha256:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce","registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c":"sha256:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.2":"sha256:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57","registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4":"sha256:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.2":"sha256:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0","registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf":"sha256:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.2":"sha256:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8","registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab":"sha256:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I1002 19:50:29.437475  409972 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1002 19:50:29.453172  409972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:50:29.565716  409972 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 19:50:32.211501  409972 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.645737165s)
	I1002 19:50:32.211613  409972 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 19:50:32.229726  409972 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.2
	I1002 19:50:32.229765  409972 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.2
	I1002 19:50:32.229773  409972 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.2
	I1002 19:50:32.229782  409972 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.2
	I1002 19:50:32.229790  409972 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1002 19:50:32.229798  409972 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1002 19:50:32.229804  409972 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1002 19:50:32.229813  409972 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 19:50:32.230911  409972 docker.go:664] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1002 19:50:32.230933  409972 cache_images.go:84] Images are preloaded, skipping loading
	I1002 19:50:32.231009  409972 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1002 19:50:32.256559  409972 command_runner.go:130] > cgroupfs
	I1002 19:50:32.256852  409972 cni.go:84] Creating CNI manager for ""
	I1002 19:50:32.256874  409972 cni.go:136] 1 nodes found, recommending kindnet
	I1002 19:50:32.256930  409972 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 19:50:32.256966  409972 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.83 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-058614 NodeName:multinode-058614 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.83"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.83 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 19:50:32.257157  409972 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.83
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-058614"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.83
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.83"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 19:50:32.257251  409972 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-058614 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.83
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-058614 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 19:50:32.257314  409972 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 19:50:32.265937  409972 command_runner.go:130] > kubeadm
	I1002 19:50:32.265954  409972 command_runner.go:130] > kubectl
	I1002 19:50:32.265958  409972 command_runner.go:130] > kubelet
	I1002 19:50:32.265975  409972 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 19:50:32.266023  409972 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 19:50:32.273907  409972 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1002 19:50:32.289390  409972 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 19:50:32.305046  409972 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1002 19:50:32.321056  409972 ssh_runner.go:195] Run: grep 192.168.39.83	control-plane.minikube.internal$ /etc/hosts
	I1002 19:50:32.324634  409972 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.83	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 19:50:32.336505  409972 certs.go:56] Setting up /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614 for IP: 192.168.39.83
	I1002 19:50:32.336533  409972 certs.go:190] acquiring lock for shared ca certs: {Name:mkd9eff411eb4f3b431b8dec98af3335c0ce4ff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:50:32.336680  409972 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17323-390762/.minikube/ca.key
	I1002 19:50:32.336718  409972 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17323-390762/.minikube/proxy-client-ca.key
	I1002 19:50:32.336761  409972 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/client.key
	I1002 19:50:32.336775  409972 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/client.crt with IP's: []
	I1002 19:50:32.391186  409972 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/client.crt ...
	I1002 19:50:32.391219  409972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/client.crt: {Name:mkc50134370d4124fef569445b951c5232308e27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:50:32.391394  409972 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/client.key ...
	I1002 19:50:32.391406  409972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/client.key: {Name:mkdb44d9484e15905f1c2a5e5e8fd11c1cb278c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:50:32.391519  409972 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/apiserver.key.af66a62a
	I1002 19:50:32.391537  409972 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/apiserver.crt.af66a62a with IP's: [192.168.39.83 10.96.0.1 127.0.0.1 10.0.0.1]
	I1002 19:50:32.500531  409972 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/apiserver.crt.af66a62a ...
	I1002 19:50:32.500563  409972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/apiserver.crt.af66a62a: {Name:mk6c504d25fd82a7c27576f2fb192bc062dbf806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:50:32.500725  409972 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/apiserver.key.af66a62a ...
	I1002 19:50:32.500736  409972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/apiserver.key.af66a62a: {Name:mkb36d231f099cac1148763c92c82e1f5e10337a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:50:32.500803  409972 certs.go:337] copying /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/apiserver.crt.af66a62a -> /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/apiserver.crt
	I1002 19:50:32.500870  409972 certs.go:341] copying /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/apiserver.key.af66a62a -> /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/apiserver.key
	I1002 19:50:32.500921  409972 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/proxy-client.key
	I1002 19:50:32.500934  409972 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/proxy-client.crt with IP's: []
	I1002 19:50:32.559779  409972 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/proxy-client.crt ...
	I1002 19:50:32.559808  409972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/proxy-client.crt: {Name:mkac566f6e92e672dee05d7a925b4af9850bddc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:50:32.559956  409972 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/proxy-client.key ...
	I1002 19:50:32.559967  409972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/proxy-client.key: {Name:mk78bad7f71bf378e7b78331b414e76aef88994a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:50:32.560028  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 19:50:32.560047  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 19:50:32.560057  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 19:50:32.560069  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 19:50:32.560088  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 19:50:32.560101  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 19:50:32.560110  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 19:50:32.560122  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 19:50:32.560172  409972 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/home/jenkins/minikube-integration/17323-390762/.minikube/certs/397995.pem (1338 bytes)
	W1002 19:50:32.560205  409972 certs.go:433] ignoring /home/jenkins/minikube-integration/17323-390762/.minikube/certs/home/jenkins/minikube-integration/17323-390762/.minikube/certs/397995_empty.pem, impossibly tiny 0 bytes
	I1002 19:50:32.560222  409972 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 19:50:32.560243  409972 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca.pem (1078 bytes)
	I1002 19:50:32.560266  409972 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/home/jenkins/minikube-integration/17323-390762/.minikube/certs/cert.pem (1123 bytes)
	I1002 19:50:32.560288  409972 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/home/jenkins/minikube-integration/17323-390762/.minikube/certs/key.pem (1675 bytes)
	I1002 19:50:32.560333  409972 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-390762/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17323-390762/.minikube/files/etc/ssl/certs/3979952.pem (1708 bytes)
	I1002 19:50:32.560354  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 19:50:32.560365  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/397995.pem -> /usr/share/ca-certificates/397995.pem
	I1002 19:50:32.560376  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/files/etc/ssl/certs/3979952.pem -> /usr/share/ca-certificates/3979952.pem
	I1002 19:50:32.560899  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 19:50:32.585792  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 19:50:32.608846  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 19:50:32.631579  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 19:50:32.654108  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 19:50:32.676751  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 19:50:32.698937  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 19:50:32.721515  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 19:50:32.743860  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 19:50:32.766533  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/certs/397995.pem --> /usr/share/ca-certificates/397995.pem (1338 bytes)
	I1002 19:50:32.788838  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/files/etc/ssl/certs/3979952.pem --> /usr/share/ca-certificates/3979952.pem (1708 bytes)
	I1002 19:50:32.814215  409972 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 19:50:32.831650  409972 ssh_runner.go:195] Run: openssl version
	I1002 19:50:32.837231  409972 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1002 19:50:32.837297  409972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3979952.pem && ln -fs /usr/share/ca-certificates/3979952.pem /etc/ssl/certs/3979952.pem"
	I1002 19:50:32.846958  409972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3979952.pem
	I1002 19:50:32.851666  409972 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 19:38 /usr/share/ca-certificates/3979952.pem
	I1002 19:50:32.851694  409972 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 19:38 /usr/share/ca-certificates/3979952.pem
	I1002 19:50:32.851739  409972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3979952.pem
	I1002 19:50:32.857044  409972 command_runner.go:130] > 3ec20f2e
	I1002 19:50:32.857346  409972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3979952.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 19:50:32.866957  409972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 19:50:32.876672  409972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 19:50:32.881216  409972 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 19:33 /usr/share/ca-certificates/minikubeCA.pem
	I1002 19:50:32.881240  409972 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:33 /usr/share/ca-certificates/minikubeCA.pem
	I1002 19:50:32.881272  409972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 19:50:32.886904  409972 command_runner.go:130] > b5213941
	I1002 19:50:32.886953  409972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 19:50:32.899154  409972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/397995.pem && ln -fs /usr/share/ca-certificates/397995.pem /etc/ssl/certs/397995.pem"
	I1002 19:50:32.908708  409972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/397995.pem
	I1002 19:50:32.913320  409972 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 19:38 /usr/share/ca-certificates/397995.pem
	I1002 19:50:32.913342  409972 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 19:38 /usr/share/ca-certificates/397995.pem
	I1002 19:50:32.913369  409972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/397995.pem
	I1002 19:50:32.918732  409972 command_runner.go:130] > 51391683
	I1002 19:50:32.919052  409972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/397995.pem /etc/ssl/certs/51391683.0"
	I1002 19:50:32.928651  409972 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 19:50:32.932856  409972 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 19:50:32.932890  409972 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 19:50:32.932934  409972 kubeadm.go:404] StartCluster: {Name:multinode-058614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-058614 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.83 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 19:50:32.933030  409972 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 19:50:32.955195  409972 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 19:50:32.964068  409972 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1002 19:50:32.964100  409972 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1002 19:50:32.964111  409972 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1002 19:50:32.964433  409972 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 19:50:32.973015  409972 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 19:50:32.982025  409972 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1002 19:50:32.982044  409972 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1002 19:50:32.982055  409972 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1002 19:50:32.982067  409972 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 19:50:32.982179  409972 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 19:50:32.982240  409972 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 19:50:33.093679  409972 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1002 19:50:33.093713  409972 command_runner.go:130] > [init] Using Kubernetes version: v1.28.2
	I1002 19:50:33.093792  409972 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 19:50:33.093805  409972 command_runner.go:130] > [preflight] Running pre-flight checks
	I1002 19:50:33.379058  409972 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 19:50:33.379120  409972 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 19:50:33.379259  409972 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 19:50:33.379285  409972 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 19:50:33.379423  409972 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 19:50:33.379435  409972 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 19:50:33.729512  409972 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 19:50:33.729614  409972 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 19:50:33.797235  409972 out.go:204]   - Generating certificates and keys ...
	I1002 19:50:33.797445  409972 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1002 19:50:33.797468  409972 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 19:50:33.797578  409972 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1002 19:50:33.797592  409972 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 19:50:33.873313  409972 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 19:50:33.873371  409972 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 19:50:33.938184  409972 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1002 19:50:33.938216  409972 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1002 19:50:34.250312  409972 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1002 19:50:34.250348  409972 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1002 19:50:34.488474  409972 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1002 19:50:34.488508  409972 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1002 19:50:34.779008  409972 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1002 19:50:34.779042  409972 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1002 19:50:34.779258  409972 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-058614] and IPs [192.168.39.83 127.0.0.1 ::1]
	I1002 19:50:34.779275  409972 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-058614] and IPs [192.168.39.83 127.0.0.1 ::1]
	I1002 19:50:34.986941  409972 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1002 19:50:34.986976  409972 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1002 19:50:34.987131  409972 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-058614] and IPs [192.168.39.83 127.0.0.1 ::1]
	I1002 19:50:34.987143  409972 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-058614] and IPs [192.168.39.83 127.0.0.1 ::1]
	I1002 19:50:35.046330  409972 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 19:50:35.046360  409972 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 19:50:35.193304  409972 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 19:50:35.193336  409972 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 19:50:35.261559  409972 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1002 19:50:35.261603  409972 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1002 19:50:35.261697  409972 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 19:50:35.261711  409972 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 19:50:35.597989  409972 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 19:50:35.598023  409972 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 19:50:35.844463  409972 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 19:50:35.844502  409972 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 19:50:36.122691  409972 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 19:50:36.122733  409972 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 19:50:36.248806  409972 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 19:50:36.248842  409972 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 19:50:36.249552  409972 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 19:50:36.249579  409972 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 19:50:36.256584  409972 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 19:50:36.259507  409972 out.go:204]   - Booting up control plane ...
	I1002 19:50:36.256633  409972 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 19:50:36.259621  409972 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 19:50:36.259628  409972 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 19:50:36.259767  409972 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 19:50:36.259782  409972 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 19:50:36.259882  409972 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 19:50:36.259898  409972 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 19:50:36.279810  409972 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 19:50:36.279839  409972 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 19:50:36.279941  409972 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 19:50:36.279953  409972 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 19:50:36.279996  409972 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1002 19:50:36.280024  409972 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1002 19:50:36.401662  409972 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 19:50:36.401692  409972 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 19:50:43.405558  409972 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.004807 seconds
	I1002 19:50:43.405596  409972 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.004807 seconds
	I1002 19:50:43.405782  409972 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 19:50:43.405785  409972 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 19:50:43.424851  409972 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 19:50:43.424884  409972 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 19:50:43.953218  409972 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 19:50:43.953244  409972 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1002 19:50:43.953508  409972 kubeadm.go:322] [mark-control-plane] Marking the node multinode-058614 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 19:50:43.953520  409972 command_runner.go:130] > [mark-control-plane] Marking the node multinode-058614 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 19:50:44.467533  409972 kubeadm.go:322] [bootstrap-token] Using token: bgxunb.6ywtiru1lyvivxig
	I1002 19:50:44.469109  409972 out.go:204]   - Configuring RBAC rules ...
	I1002 19:50:44.467609  409972 command_runner.go:130] > [bootstrap-token] Using token: bgxunb.6ywtiru1lyvivxig
	I1002 19:50:44.469248  409972 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 19:50:44.469268  409972 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 19:50:44.476949  409972 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 19:50:44.476974  409972 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 19:50:44.484328  409972 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 19:50:44.484352  409972 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 19:50:44.490926  409972 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 19:50:44.490947  409972 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 19:50:44.495276  409972 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 19:50:44.495301  409972 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 19:50:44.501402  409972 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 19:50:44.501419  409972 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 19:50:44.524355  409972 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 19:50:44.524391  409972 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 19:50:44.814606  409972 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 19:50:44.814640  409972 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1002 19:50:44.883654  409972 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 19:50:44.883688  409972 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1002 19:50:44.886897  409972 kubeadm.go:322] 
	I1002 19:50:44.886978  409972 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 19:50:44.886996  409972 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1002 19:50:44.887002  409972 kubeadm.go:322] 
	I1002 19:50:44.887064  409972 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 19:50:44.887070  409972 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1002 19:50:44.887076  409972 kubeadm.go:322] 
	I1002 19:50:44.887108  409972 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 19:50:44.887113  409972 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1002 19:50:44.887308  409972 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 19:50:44.887326  409972 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 19:50:44.887367  409972 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 19:50:44.887380  409972 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 19:50:44.887396  409972 kubeadm.go:322] 
	I1002 19:50:44.887498  409972 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1002 19:50:44.887520  409972 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1002 19:50:44.887527  409972 kubeadm.go:322] 
	I1002 19:50:44.887594  409972 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 19:50:44.887608  409972 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 19:50:44.887617  409972 kubeadm.go:322] 
	I1002 19:50:44.887690  409972 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 19:50:44.887703  409972 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1002 19:50:44.887814  409972 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 19:50:44.887826  409972 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 19:50:44.887910  409972 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 19:50:44.887922  409972 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 19:50:44.887928  409972 kubeadm.go:322] 
	I1002 19:50:44.888094  409972 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 19:50:44.888112  409972 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1002 19:50:44.888239  409972 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 19:50:44.888258  409972 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1002 19:50:44.888264  409972 kubeadm.go:322] 
	I1002 19:50:44.888463  409972 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token bgxunb.6ywtiru1lyvivxig \
	I1002 19:50:44.888481  409972 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token bgxunb.6ywtiru1lyvivxig \
	I1002 19:50:44.888609  409972 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:34e35905a788df884ba37f75e8ba6d269171b9f9a012b72423ad6eee1d6bffad \
	I1002 19:50:44.888618  409972 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:34e35905a788df884ba37f75e8ba6d269171b9f9a012b72423ad6eee1d6bffad \
	I1002 19:50:44.888682  409972 kubeadm.go:322] 	--control-plane 
	I1002 19:50:44.888699  409972 command_runner.go:130] > 	--control-plane 
	I1002 19:50:44.888714  409972 kubeadm.go:322] 
	I1002 19:50:44.888804  409972 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 19:50:44.888810  409972 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1002 19:50:44.888814  409972 kubeadm.go:322] 
	I1002 19:50:44.888915  409972 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token bgxunb.6ywtiru1lyvivxig \
	I1002 19:50:44.888927  409972 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token bgxunb.6ywtiru1lyvivxig \
	I1002 19:50:44.889071  409972 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:34e35905a788df884ba37f75e8ba6d269171b9f9a012b72423ad6eee1d6bffad 
	I1002 19:50:44.889085  409972 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:34e35905a788df884ba37f75e8ba6d269171b9f9a012b72423ad6eee1d6bffad 
	I1002 19:50:44.891094  409972 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 19:50:44.891108  409972 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 19:50:44.891402  409972 cni.go:84] Creating CNI manager for ""
	I1002 19:50:44.891423  409972 cni.go:136] 1 nodes found, recommending kindnet
	I1002 19:50:44.893021  409972 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1002 19:50:44.894105  409972 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 19:50:44.900830  409972 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1002 19:50:44.900849  409972 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1002 19:50:44.900856  409972 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1002 19:50:44.900863  409972 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 19:50:44.900869  409972 command_runner.go:130] > Access: 2023-10-02 19:50:11.118876540 +0000
	I1002 19:50:44.900874  409972 command_runner.go:130] > Modify: 2023-09-19 00:07:45.000000000 +0000
	I1002 19:50:44.900879  409972 command_runner.go:130] > Change: 2023-10-02 19:50:09.361876540 +0000
	I1002 19:50:44.900882  409972 command_runner.go:130] >  Birth: -
	I1002 19:50:44.901513  409972 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1002 19:50:44.901527  409972 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1002 19:50:44.966714  409972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 19:50:46.123709  409972 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1002 19:50:46.130622  409972 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1002 19:50:46.139954  409972 command_runner.go:130] > serviceaccount/kindnet created
	I1002 19:50:46.154995  409972 command_runner.go:130] > daemonset.apps/kindnet created
	I1002 19:50:46.158006  409972 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.1912537s)
	I1002 19:50:46.158060  409972 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 19:50:46.158166  409972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:50:46.158169  409972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=02d3b4696241894a75ebcb6562f5842e65de7b86 minikube.k8s.io/name=multinode-058614 minikube.k8s.io/updated_at=2023_10_02T19_50_46_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:50:46.186444  409972 command_runner.go:130] > -16
	I1002 19:50:46.186561  409972 ops.go:34] apiserver oom_adj: -16
	I1002 19:50:46.396178  409972 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1002 19:50:46.396235  409972 command_runner.go:130] > node/multinode-058614 labeled
	I1002 19:50:46.396436  409972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:50:46.484384  409972 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 19:50:46.484502  409972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:50:46.594659  409972 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 19:50:47.097526  409972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:50:47.199647  409972 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 19:50:47.597031  409972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:50:47.693286  409972 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 19:50:48.097936  409972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:50:48.191352  409972 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 19:50:48.597396  409972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:50:48.690079  409972 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 19:50:49.097708  409972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:50:49.186659  409972 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 19:50:49.597255  409972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:50:49.696841  409972 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 19:50:50.097496  409972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:50:50.189560  409972 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 19:50:50.597097  409972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:50:50.692593  409972 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 19:50:51.097670  409972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:50:51.179336  409972 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 19:50:51.596991  409972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:50:51.685477  409972 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 19:50:52.097677  409972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:50:52.181523  409972 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 19:50:52.597112  409972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:50:52.685296  409972 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 19:50:53.096974  409972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:50:53.214873  409972 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 19:50:53.597064  409972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:50:53.690106  409972 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 19:50:54.096947  409972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:50:54.190222  409972 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 19:50:54.597814  409972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:50:54.691802  409972 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 19:50:55.097103  409972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:50:55.187639  409972 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 19:50:55.597190  409972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:50:55.710317  409972 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 19:50:56.097867  409972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:50:56.206849  409972 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 19:50:56.597904  409972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 19:50:56.712697  409972 command_runner.go:130] > NAME      SECRETS   AGE
	I1002 19:50:56.712720  409972 command_runner.go:130] > default   0         0s
	I1002 19:50:56.714566  409972 kubeadm.go:1081] duration metric: took 10.556471493s to wait for elevateKubeSystemPrivileges.
	I1002 19:50:56.714596  409972 kubeadm.go:406] StartCluster complete in 23.781668076s
	I1002 19:50:56.714625  409972 settings.go:142] acquiring lock: {Name:mkb4ca40f1939e3445461ba1faa925717a2f2fb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:50:56.714771  409972 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17323-390762/kubeconfig
	I1002 19:50:56.715777  409972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-390762/kubeconfig: {Name:mk74ddabf197e37062c31902aa8bd3a9b6ce152f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:50:56.716100  409972 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 19:50:56.716129  409972 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 19:50:56.716246  409972 addons.go:69] Setting storage-provisioner=true in profile "multinode-058614"
	I1002 19:50:56.716279  409972 addons.go:69] Setting default-storageclass=true in profile "multinode-058614"
	I1002 19:50:56.716301  409972 addons.go:231] Setting addon storage-provisioner=true in "multinode-058614"
	I1002 19:50:56.716308  409972 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-058614"
	I1002 19:50:56.716346  409972 config.go:182] Loaded profile config "multinode-058614": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 19:50:56.716383  409972 host.go:66] Checking if "multinode-058614" exists ...
	I1002 19:50:56.716526  409972 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17323-390762/kubeconfig
	I1002 19:50:56.716682  409972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:50:56.716721  409972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:50:56.716780  409972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:50:56.716818  409972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:50:56.716854  409972 kapi.go:59] client config for multinode-058614: &rest.Config{Host:"https://192.168.39.83:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/client.crt", KeyFile:"/home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/client.key", CAFile:"/home/jenkins/minikube-integration/17323-390762/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf83c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 19:50:56.717799  409972 cert_rotation.go:137] Starting client certificate rotation controller
	I1002 19:50:56.718174  409972 round_trippers.go:463] GET https://192.168.39.83:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1002 19:50:56.718194  409972 round_trippers.go:469] Request Headers:
	I1002 19:50:56.718204  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:50:56.718216  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:50:56.728265  409972 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1002 19:50:56.728285  409972 round_trippers.go:577] Response Headers:
	I1002 19:50:56.728295  409972 round_trippers.go:580]     Audit-Id: 4496b0d1-9c70-45a3-886b-4a241640f11a
	I1002 19:50:56.728303  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:50:56.728312  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:50:56.728321  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:50:56.728333  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:50:56.728345  409972 round_trippers.go:580]     Content-Length: 291
	I1002 19:50:56.728356  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:50:56 GMT
	I1002 19:50:56.728390  409972 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"37a26193-5816-4ab7-acfa-78d217f28a0e","resourceVersion":"273","creationTimestamp":"2023-10-02T19:50:44Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1002 19:50:56.728897  409972 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"37a26193-5816-4ab7-acfa-78d217f28a0e","resourceVersion":"273","creationTimestamp":"2023-10-02T19:50:44Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1002 19:50:56.728966  409972 round_trippers.go:463] PUT https://192.168.39.83:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1002 19:50:56.728977  409972 round_trippers.go:469] Request Headers:
	I1002 19:50:56.728988  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:50:56.729000  409972 round_trippers.go:473]     Content-Type: application/json
	I1002 19:50:56.729010  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:50:56.732178  409972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38729
	I1002 19:50:56.732629  409972 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:50:56.733057  409972 main.go:141] libmachine: Using API Version  1
	I1002 19:50:56.733077  409972 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:50:56.733430  409972 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:50:56.734029  409972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:50:56.734083  409972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:50:56.735341  409972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38307
	I1002 19:50:56.735791  409972 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:50:56.736348  409972 main.go:141] libmachine: Using API Version  1
	I1002 19:50:56.736374  409972 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:50:56.736718  409972 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:50:56.736910  409972 main.go:141] libmachine: (multinode-058614) Calling .GetState
	I1002 19:50:56.739081  409972 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17323-390762/kubeconfig
	I1002 19:50:56.739380  409972 kapi.go:59] client config for multinode-058614: &rest.Config{Host:"https://192.168.39.83:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/client.crt", KeyFile:"/home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/client.key", CAFile:"/home/jenkins/minikube-integration/17323-390762/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf83c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 19:50:56.739705  409972 addons.go:231] Setting addon default-storageclass=true in "multinode-058614"
	I1002 19:50:56.739742  409972 host.go:66] Checking if "multinode-058614" exists ...
	I1002 19:50:56.740118  409972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:50:56.740163  409972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:50:56.741969  409972 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1002 19:50:56.741988  409972 round_trippers.go:577] Response Headers:
	I1002 19:50:56.741998  409972 round_trippers.go:580]     Audit-Id: db5716cf-cdf9-4571-a1cd-0cf242940fa4
	I1002 19:50:56.742007  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:50:56.742016  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:50:56.742025  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:50:56.742039  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:50:56.742050  409972 round_trippers.go:580]     Content-Length: 291
	I1002 19:50:56.742059  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:50:56 GMT
	I1002 19:50:56.742517  409972 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"37a26193-5816-4ab7-acfa-78d217f28a0e","resourceVersion":"348","creationTimestamp":"2023-10-02T19:50:44Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1002 19:50:56.742684  409972 round_trippers.go:463] GET https://192.168.39.83:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1002 19:50:56.742699  409972 round_trippers.go:469] Request Headers:
	I1002 19:50:56.742710  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:50:56.742723  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:50:56.745216  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:50:56.745231  409972 round_trippers.go:577] Response Headers:
	I1002 19:50:56.745241  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:50:56 GMT
	I1002 19:50:56.745250  409972 round_trippers.go:580]     Audit-Id: a49dfdea-6714-40c4-8708-b2de3856e521
	I1002 19:50:56.745278  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:50:56.745289  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:50:56.745298  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:50:56.745309  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:50:56.745319  409972 round_trippers.go:580]     Content-Length: 291
	I1002 19:50:56.745343  409972 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"37a26193-5816-4ab7-acfa-78d217f28a0e","resourceVersion":"348","creationTimestamp":"2023-10-02T19:50:44Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1002 19:50:56.745426  409972 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-058614" context rescaled to 1 replicas
	I1002 19:50:56.745458  409972 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.83 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 19:50:56.747278  409972 out.go:177] * Verifying Kubernetes components...
	I1002 19:50:56.748757  409972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 19:50:56.749999  409972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44549
	I1002 19:50:56.750397  409972 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:50:56.751009  409972 main.go:141] libmachine: Using API Version  1
	I1002 19:50:56.751029  409972 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:50:56.751483  409972 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:50:56.751698  409972 main.go:141] libmachine: (multinode-058614) Calling .GetState
	I1002 19:50:56.753332  409972 main.go:141] libmachine: (multinode-058614) Calling .DriverName
	I1002 19:50:56.754930  409972 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 19:50:56.756280  409972 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 19:50:56.756298  409972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 19:50:56.756317  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHHostname
	I1002 19:50:56.755664  409972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35797
	I1002 19:50:56.757007  409972 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:50:56.757553  409972 main.go:141] libmachine: Using API Version  1
	I1002 19:50:56.757580  409972 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:50:56.757969  409972 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:50:56.758540  409972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:50:56.758589  409972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:50:56.759545  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:56.759971  409972 main.go:141] libmachine: (multinode-058614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:90:6b", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:50:13 +0000 UTC Type:0 Mac:52:54:00:c7:90:6b Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-058614 Clientid:01:52:54:00:c7:90:6b}
	I1002 19:50:56.760003  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined IP address 192.168.39.83 and MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:56.760117  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHPort
	I1002 19:50:56.760364  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHKeyPath
	I1002 19:50:56.760530  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHUsername
	I1002 19:50:56.760718  409972 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614/id_rsa Username:docker}
	I1002 19:50:56.772733  409972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39047
	I1002 19:50:56.773118  409972 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:50:56.773631  409972 main.go:141] libmachine: Using API Version  1
	I1002 19:50:56.773661  409972 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:50:56.774011  409972 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:50:56.774203  409972 main.go:141] libmachine: (multinode-058614) Calling .GetState
	I1002 19:50:56.775804  409972 main.go:141] libmachine: (multinode-058614) Calling .DriverName
	I1002 19:50:56.776025  409972 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 19:50:56.776040  409972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 19:50:56.776054  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHHostname
	I1002 19:50:56.779130  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:56.779549  409972 main.go:141] libmachine: (multinode-058614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:90:6b", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:50:13 +0000 UTC Type:0 Mac:52:54:00:c7:90:6b Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-058614 Clientid:01:52:54:00:c7:90:6b}
	I1002 19:50:56.779581  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined IP address 192.168.39.83 and MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:50:56.779748  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHPort
	I1002 19:50:56.779924  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHKeyPath
	I1002 19:50:56.780060  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHUsername
	I1002 19:50:56.780175  409972 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614/id_rsa Username:docker}
	I1002 19:50:56.901104  409972 command_runner.go:130] > apiVersion: v1
	I1002 19:50:56.901133  409972 command_runner.go:130] > data:
	I1002 19:50:56.901140  409972 command_runner.go:130] >   Corefile: |
	I1002 19:50:56.901145  409972 command_runner.go:130] >     .:53 {
	I1002 19:50:56.901149  409972 command_runner.go:130] >         errors
	I1002 19:50:56.901155  409972 command_runner.go:130] >         health {
	I1002 19:50:56.901160  409972 command_runner.go:130] >            lameduck 5s
	I1002 19:50:56.901163  409972 command_runner.go:130] >         }
	I1002 19:50:56.901167  409972 command_runner.go:130] >         ready
	I1002 19:50:56.901173  409972 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1002 19:50:56.901177  409972 command_runner.go:130] >            pods insecure
	I1002 19:50:56.901197  409972 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1002 19:50:56.901206  409972 command_runner.go:130] >            ttl 30
	I1002 19:50:56.901213  409972 command_runner.go:130] >         }
	I1002 19:50:56.901221  409972 command_runner.go:130] >         prometheus :9153
	I1002 19:50:56.901229  409972 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1002 19:50:56.901246  409972 command_runner.go:130] >            max_concurrent 1000
	I1002 19:50:56.901252  409972 command_runner.go:130] >         }
	I1002 19:50:56.901257  409972 command_runner.go:130] >         cache 30
	I1002 19:50:56.901265  409972 command_runner.go:130] >         loop
	I1002 19:50:56.901268  409972 command_runner.go:130] >         reload
	I1002 19:50:56.901273  409972 command_runner.go:130] >         loadbalance
	I1002 19:50:56.901277  409972 command_runner.go:130] >     }
	I1002 19:50:56.901282  409972 command_runner.go:130] > kind: ConfigMap
	I1002 19:50:56.901289  409972 command_runner.go:130] > metadata:
	I1002 19:50:56.901302  409972 command_runner.go:130] >   creationTimestamp: "2023-10-02T19:50:44Z"
	I1002 19:50:56.901312  409972 command_runner.go:130] >   name: coredns
	I1002 19:50:56.901320  409972 command_runner.go:130] >   namespace: kube-system
	I1002 19:50:56.901328  409972 command_runner.go:130] >   resourceVersion: "269"
	I1002 19:50:56.901337  409972 command_runner.go:130] >   uid: c4b9862d-4b1b-45af-a32c-a1f35a9b7040
	I1002 19:50:56.902732  409972 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 19:50:56.902979  409972 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17323-390762/kubeconfig
	I1002 19:50:56.903251  409972 kapi.go:59] client config for multinode-058614: &rest.Config{Host:"https://192.168.39.83:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/client.crt", KeyFile:"/home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/client.key", CAFile:"/home/jenkins/minikube-integration/17323-390762/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf83c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 19:50:56.903550  409972 node_ready.go:35] waiting up to 6m0s for node "multinode-058614" to be "Ready" ...
	I1002 19:50:56.903679  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:50:56.903690  409972 round_trippers.go:469] Request Headers:
	I1002 19:50:56.903702  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:50:56.903724  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:50:56.906034  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:50:56.906058  409972 round_trippers.go:577] Response Headers:
	I1002 19:50:56.906067  409972 round_trippers.go:580]     Audit-Id: 121de801-6bcd-41d7-9544-129df21d918c
	I1002 19:50:56.906076  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:50:56.906084  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:50:56.906096  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:50:56.906105  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:50:56.906115  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:50:56 GMT
	I1002 19:50:56.906310  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"333","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio [truncated 4823 chars]
	I1002 19:50:56.907053  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:50:56.907074  409972 round_trippers.go:469] Request Headers:
	I1002 19:50:56.907085  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:50:56.907095  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:50:56.924048  409972 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1002 19:50:56.924081  409972 round_trippers.go:577] Response Headers:
	I1002 19:50:56.924093  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:50:56.924106  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:50:56 GMT
	I1002 19:50:56.924115  409972 round_trippers.go:580]     Audit-Id: fa6825fb-ca1e-494a-aee2-e18d5a2fefdb
	I1002 19:50:56.924124  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:50:56.924134  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:50:56.924147  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:50:56.924442  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"333","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio [truncated 4823 chars]
	I1002 19:50:56.933135  409972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 19:50:56.999599  409972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 19:50:57.425156  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:50:57.425185  409972 round_trippers.go:469] Request Headers:
	I1002 19:50:57.425203  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:50:57.425209  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:50:57.437193  409972 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1002 19:50:57.437229  409972 round_trippers.go:577] Response Headers:
	I1002 19:50:57.437241  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:50:57 GMT
	I1002 19:50:57.437250  409972 round_trippers.go:580]     Audit-Id: 2327917d-3b4e-45f3-a9b3-ea80f699cbc7
	I1002 19:50:57.437258  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:50:57.437267  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:50:57.437275  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:50:57.437283  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:50:57.437414  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"350","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1002 19:50:57.926110  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:50:57.926139  409972 round_trippers.go:469] Request Headers:
	I1002 19:50:57.926151  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:50:57.926158  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:50:57.930953  409972 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 19:50:57.930990  409972 round_trippers.go:577] Response Headers:
	I1002 19:50:57.931000  409972 round_trippers.go:580]     Audit-Id: 6b18ff98-5898-4871-a491-87aed5ae6138
	I1002 19:50:57.931008  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:50:57.931015  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:50:57.931023  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:50:57.931032  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:50:57.931040  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:50:57 GMT
	I1002 19:50:57.931214  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"350","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1002 19:50:58.426117  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:50:58.426163  409972 round_trippers.go:469] Request Headers:
	I1002 19:50:58.426184  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:50:58.426194  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:50:58.428873  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:50:58.428901  409972 round_trippers.go:577] Response Headers:
	I1002 19:50:58.428912  409972 round_trippers.go:580]     Audit-Id: d4bf84a7-0d79-4fb7-a72a-9a9c2339b6ef
	I1002 19:50:58.428922  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:50:58.428929  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:50:58.428935  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:50:58.428940  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:50:58.428946  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:50:58 GMT
	I1002 19:50:58.429131  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"350","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1002 19:50:58.438661  409972 command_runner.go:130] > configmap/coredns replaced
	I1002 19:50:58.445175  409972 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.542401757s)
	I1002 19:50:58.445212  409972 start.go:923] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1002 19:50:58.601766  409972 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1002 19:50:58.608183  409972 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1002 19:50:58.626309  409972 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1002 19:50:58.630521  409972 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1002 19:50:58.639136  409972 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1002 19:50:58.661009  409972 command_runner.go:130] > pod/storage-provisioner created
	I1002 19:50:58.667408  409972 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1002 19:50:58.667484  409972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.667827572s)
	I1002 19:50:58.667530  409972 main.go:141] libmachine: Making call to close driver server
	I1002 19:50:58.667539  409972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.734368355s)
	I1002 19:50:58.667552  409972 main.go:141] libmachine: (multinode-058614) Calling .Close
	I1002 19:50:58.667587  409972 main.go:141] libmachine: Making call to close driver server
	I1002 19:50:58.667605  409972 main.go:141] libmachine: (multinode-058614) Calling .Close
	I1002 19:50:58.667964  409972 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:50:58.667985  409972 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:50:58.667995  409972 main.go:141] libmachine: Making call to close driver server
	I1002 19:50:58.668001  409972 main.go:141] libmachine: (multinode-058614) DBG | Closing plugin on server side
	I1002 19:50:58.668021  409972 main.go:141] libmachine: (multinode-058614) DBG | Closing plugin on server side
	I1002 19:50:58.668038  409972 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:50:58.668050  409972 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:50:58.668060  409972 main.go:141] libmachine: Making call to close driver server
	I1002 19:50:58.668068  409972 main.go:141] libmachine: (multinode-058614) Calling .Close
	I1002 19:50:58.668003  409972 main.go:141] libmachine: (multinode-058614) Calling .Close
	I1002 19:50:58.668304  409972 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:50:58.668397  409972 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:50:58.668448  409972 main.go:141] libmachine: (multinode-058614) DBG | Closing plugin on server side
	I1002 19:50:58.668478  409972 main.go:141] libmachine: (multinode-058614) DBG | Closing plugin on server side
	I1002 19:50:58.668497  409972 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:50:58.668510  409972 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:50:58.668632  409972 round_trippers.go:463] GET https://192.168.39.83:8443/apis/storage.k8s.io/v1/storageclasses
	I1002 19:50:58.668646  409972 round_trippers.go:469] Request Headers:
	I1002 19:50:58.668656  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:50:58.668665  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:50:58.671640  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:50:58.671663  409972 round_trippers.go:577] Response Headers:
	I1002 19:50:58.671671  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:50:58.671677  409972 round_trippers.go:580]     Content-Length: 1273
	I1002 19:50:58.671682  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:50:58 GMT
	I1002 19:50:58.671687  409972 round_trippers.go:580]     Audit-Id: 2f515877-8fd3-4d86-9488-cb5d8e341ef5
	I1002 19:50:58.671692  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:50:58.671697  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:50:58.671702  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:50:58.671782  409972 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"416"},"items":[{"metadata":{"name":"standard","uid":"a513ac38-597c-4a51-bf17-e5ea81ee2a6c","resourceVersion":"408","creationTimestamp":"2023-10-02T19:50:58Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-02T19:50:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1002 19:50:58.672191  409972 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"a513ac38-597c-4a51-bf17-e5ea81ee2a6c","resourceVersion":"408","creationTimestamp":"2023-10-02T19:50:58Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-02T19:50:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1002 19:50:58.672268  409972 round_trippers.go:463] PUT https://192.168.39.83:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1002 19:50:58.672278  409972 round_trippers.go:469] Request Headers:
	I1002 19:50:58.672285  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:50:58.672292  409972 round_trippers.go:473]     Content-Type: application/json
	I1002 19:50:58.672300  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:50:58.674975  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:50:58.674991  409972 round_trippers.go:577] Response Headers:
	I1002 19:50:58.674997  409972 round_trippers.go:580]     Audit-Id: e4d72544-9a22-4818-bad2-40d037b44519
	I1002 19:50:58.675002  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:50:58.675007  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:50:58.675012  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:50:58.675017  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:50:58.675022  409972 round_trippers.go:580]     Content-Length: 1220
	I1002 19:50:58.675029  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:50:58 GMT
	I1002 19:50:58.675063  409972 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"a513ac38-597c-4a51-bf17-e5ea81ee2a6c","resourceVersion":"408","creationTimestamp":"2023-10-02T19:50:58Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-02T19:50:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1002 19:50:58.675204  409972 main.go:141] libmachine: Making call to close driver server
	I1002 19:50:58.675237  409972 main.go:141] libmachine: (multinode-058614) Calling .Close
	I1002 19:50:58.675492  409972 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:50:58.675515  409972 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:50:58.675534  409972 main.go:141] libmachine: (multinode-058614) DBG | Closing plugin on server side
	I1002 19:50:58.677167  409972 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1002 19:50:58.678579  409972 addons.go:502] enable addons completed in 1.962456745s: enabled=[storage-provisioner default-storageclass]
	I1002 19:50:58.925290  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:50:58.925319  409972 round_trippers.go:469] Request Headers:
	I1002 19:50:58.925328  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:50:58.925334  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:50:58.928234  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:50:58.928254  409972 round_trippers.go:577] Response Headers:
	I1002 19:50:58.928261  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:50:58.928267  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:50:58.928272  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:50:58 GMT
	I1002 19:50:58.928279  409972 round_trippers.go:580]     Audit-Id: 5a7af9bf-d6f8-4208-a7f1-052f97126608
	I1002 19:50:58.928287  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:50:58.928297  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:50:58.928628  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"350","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1002 19:50:58.928946  409972 node_ready.go:58] node "multinode-058614" has status "Ready":"False"
	I1002 19:50:59.425384  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:50:59.425414  409972 round_trippers.go:469] Request Headers:
	I1002 19:50:59.425423  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:50:59.425429  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:50:59.428315  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:50:59.428335  409972 round_trippers.go:577] Response Headers:
	I1002 19:50:59.428342  409972 round_trippers.go:580]     Audit-Id: 90c084e5-a827-437b-b894-d8ebd55ddf31
	I1002 19:50:59.428347  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:50:59.428352  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:50:59.428357  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:50:59.428362  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:50:59.428367  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:50:59 GMT
	I1002 19:50:59.428666  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"350","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1002 19:50:59.925348  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:50:59.925375  409972 round_trippers.go:469] Request Headers:
	I1002 19:50:59.925389  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:50:59.925395  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:50:59.929373  409972 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 19:50:59.929400  409972 round_trippers.go:577] Response Headers:
	I1002 19:50:59.929409  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:50:59.929417  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:50:59.929424  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:50:59.929430  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:50:59.929438  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:50:59 GMT
	I1002 19:50:59.929446  409972 round_trippers.go:580]     Audit-Id: f2010b79-9871-49e5-b7fd-eec5507d0f39
	I1002 19:50:59.929930  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"350","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1002 19:51:00.425573  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:00.425599  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:00.425608  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:00.425615  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:00.428252  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:00.428273  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:00.428280  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:00.428287  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:00 GMT
	I1002 19:51:00.428298  409972 round_trippers.go:580]     Audit-Id: b749960e-bd5c-4f36-b8a5-1b1d602a4eeb
	I1002 19:51:00.428310  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:00.428319  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:00.428329  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:00.428484  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"350","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1002 19:51:00.925245  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:00.925274  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:00.925287  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:00.925298  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:00.927951  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:00.927979  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:00.927990  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:00.927998  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:00.928007  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:00 GMT
	I1002 19:51:00.928015  409972 round_trippers.go:580]     Audit-Id: 1d8d5a3a-c0a0-4a02-afa4-e0976a3c429a
	I1002 19:51:00.928023  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:00.928031  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:00.928298  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"350","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1002 19:51:01.426040  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:01.426068  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:01.426077  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:01.426083  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:01.429027  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:01.429048  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:01.429055  409972 round_trippers.go:580]     Audit-Id: 01d38a78-bcb0-4a00-9cb9-541bc49838a6
	I1002 19:51:01.429060  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:01.429065  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:01.429070  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:01.429076  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:01.429083  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:01 GMT
	I1002 19:51:01.429244  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"350","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1002 19:51:01.429565  409972 node_ready.go:58] node "multinode-058614" has status "Ready":"False"
	I1002 19:51:01.925618  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:01.925644  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:01.925652  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:01.925659  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:01.928318  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:01.928340  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:01.928351  409972 round_trippers.go:580]     Audit-Id: 9f87366d-8d6a-4c10-b7fe-927944e56229
	I1002 19:51:01.928358  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:01.928363  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:01.928368  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:01.928373  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:01.928379  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:01 GMT
	I1002 19:51:01.928568  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"350","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1002 19:51:02.425249  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:02.425273  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:02.425281  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:02.425293  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:02.428101  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:02.428128  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:02.428139  409972 round_trippers.go:580]     Audit-Id: 7313d268-572e-455f-9b40-e4ab8b041519
	I1002 19:51:02.428149  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:02.428158  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:02.428164  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:02.428180  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:02.428191  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:02 GMT
	I1002 19:51:02.428774  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"350","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1002 19:51:02.925385  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:02.925412  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:02.925423  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:02.925432  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:02.928606  409972 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 19:51:02.928627  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:02.928633  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:02.928639  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:02.928644  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:02.928649  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:02 GMT
	I1002 19:51:02.928654  409972 round_trippers.go:580]     Audit-Id: e9832afe-bece-419c-95f6-0a52646a91ba
	I1002 19:51:02.928659  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:02.929145  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"350","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1002 19:51:03.425360  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:03.425387  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:03.425398  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:03.425407  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:03.428206  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:03.428230  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:03.428240  409972 round_trippers.go:580]     Audit-Id: b210b2f5-1d50-4bea-9051-ab45c2adeb3d
	I1002 19:51:03.428249  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:03.428264  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:03.428275  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:03.428286  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:03.428294  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:03 GMT
	I1002 19:51:03.428533  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"350","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1002 19:51:03.925178  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:03.925207  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:03.925218  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:03.925226  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:03.928540  409972 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 19:51:03.928562  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:03.928568  409972 round_trippers.go:580]     Audit-Id: 0d4eef76-2ec2-4606-8cdf-4844049546ee
	I1002 19:51:03.928574  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:03.928579  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:03.928584  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:03.928589  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:03.928594  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:03 GMT
	I1002 19:51:03.929026  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"350","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1002 19:51:03.929694  409972 node_ready.go:58] node "multinode-058614" has status "Ready":"False"
	I1002 19:51:04.425727  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:04.425752  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:04.425775  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:04.425789  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:04.428739  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:04.428756  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:04.428763  409972 round_trippers.go:580]     Audit-Id: 08fcc3cd-3f11-494a-951b-875fbdcff620
	I1002 19:51:04.428768  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:04.428774  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:04.428782  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:04.428791  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:04.428801  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:04 GMT
	I1002 19:51:04.429109  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"350","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1002 19:51:04.925909  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:04.925937  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:04.925946  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:04.925952  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:04.929161  409972 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 19:51:04.929191  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:04.929200  409972 round_trippers.go:580]     Audit-Id: decfba71-5338-4b8b-90a6-39e4289c3f6e
	I1002 19:51:04.929227  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:04.929235  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:04.929244  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:04.929254  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:04.929267  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:04 GMT
	I1002 19:51:04.929538  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"350","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1002 19:51:05.426003  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:05.426031  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:05.426039  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:05.426045  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:05.428847  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:05.428870  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:05.428877  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:05.428883  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:05.428888  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:05 GMT
	I1002 19:51:05.428893  409972 round_trippers.go:580]     Audit-Id: f34378db-a66d-4463-958f-677a7a93a5f4
	I1002 19:51:05.428898  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:05.428903  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:05.429479  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"350","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1002 19:51:05.925197  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:05.925227  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:05.925235  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:05.925242  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:05.928095  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:05.928123  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:05.928132  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:05.928140  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:05.928153  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:05 GMT
	I1002 19:51:05.928160  409972 round_trippers.go:580]     Audit-Id: 0c7613f5-dad3-405a-8bbb-8fa539e40503
	I1002 19:51:05.928167  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:05.928186  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:05.928488  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"350","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1002 19:51:06.425155  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:06.425187  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:06.425238  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:06.425248  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:06.428478  409972 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 19:51:06.428501  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:06.428507  409972 round_trippers.go:580]     Audit-Id: ae5bd2f0-22e8-4694-aa91-3ae6cf8aa7a8
	I1002 19:51:06.428513  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:06.428518  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:06.428523  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:06.428528  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:06.428535  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:06 GMT
	I1002 19:51:06.428985  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"350","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1002 19:51:06.429319  409972 node_ready.go:58] node "multinode-058614" has status "Ready":"False"
	I1002 19:51:06.925470  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:06.925508  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:06.925517  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:06.925523  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:06.928573  409972 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 19:51:06.928598  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:06.928608  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:06.928618  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:06.928627  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:06.928634  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:06 GMT
	I1002 19:51:06.928647  409972 round_trippers.go:580]     Audit-Id: 17a43964-084a-45d1-8086-17b80f528899
	I1002 19:51:06.928653  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:06.929005  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"350","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1002 19:51:07.425667  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:07.425692  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:07.425701  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:07.425706  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:07.428607  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:07.428633  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:07.428643  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:07.428652  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:07 GMT
	I1002 19:51:07.428660  409972 round_trippers.go:580]     Audit-Id: 33bcc0d5-0490-4d08-81ee-3e020c2d8b4a
	I1002 19:51:07.428669  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:07.428677  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:07.428690  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:07.429246  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"350","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1002 19:51:07.926064  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:07.926098  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:07.926108  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:07.926123  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:07.929052  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:07.929074  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:07.929081  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:07.929086  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:07.929091  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:07.929096  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:07 GMT
	I1002 19:51:07.929101  409972 round_trippers.go:580]     Audit-Id: 40ce5e82-8ca0-45b1-95de-27486524e2d9
	I1002 19:51:07.929106  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:07.930236  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"350","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1002 19:51:08.425917  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:08.425947  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:08.425956  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:08.425962  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:08.429201  409972 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 19:51:08.429227  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:08.429238  409972 round_trippers.go:580]     Audit-Id: e0da8af2-88bc-47a8-be5a-61b81ee4a52b
	I1002 19:51:08.429245  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:08.429252  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:08.429259  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:08.429266  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:08.429275  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:08 GMT
	I1002 19:51:08.429545  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"350","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1002 19:51:08.430394  409972 node_ready.go:58] node "multinode-058614" has status "Ready":"False"
	I1002 19:51:08.925155  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:08.925199  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:08.925208  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:08.925215  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:08.928100  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:08.928128  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:08.928138  409972 round_trippers.go:580]     Audit-Id: 2becf5b9-981e-4246-80bd-052fb0bdc514
	I1002 19:51:08.928145  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:08.928152  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:08.928159  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:08.928166  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:08.928188  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:08 GMT
	I1002 19:51:08.928474  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"350","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1002 19:51:09.425182  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:09.425209  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:09.425218  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:09.425232  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:09.428518  409972 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 19:51:09.428544  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:09.428553  409972 round_trippers.go:580]     Audit-Id: 822a09c0-5b3f-4e0b-b053-7f9679a0a235
	I1002 19:51:09.428559  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:09.428565  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:09.428570  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:09.428575  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:09.428580  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:09 GMT
	I1002 19:51:09.428922  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"350","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1002 19:51:09.925632  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:09.925662  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:09.925670  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:09.925676  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:09.928292  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:09.928320  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:09.928335  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:09.928343  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:09 GMT
	I1002 19:51:09.928348  409972 round_trippers.go:580]     Audit-Id: b0bb1a64-0e51-40c5-8f50-da36b30d686f
	I1002 19:51:09.928354  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:09.928359  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:09.928367  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:09.928516  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"350","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I1002 19:51:10.425252  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:10.425279  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:10.425288  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:10.425294  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:10.427602  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:10.427630  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:10.427642  409972 round_trippers.go:580]     Audit-Id: 50826370-a1f4-4b7a-83c8-d5210bf3628a
	I1002 19:51:10.427654  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:10.427662  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:10.427668  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:10.427673  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:10.427678  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:10 GMT
	I1002 19:51:10.427939  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"434","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1002 19:51:10.428288  409972 node_ready.go:49] node "multinode-058614" has status "Ready":"True"
	I1002 19:51:10.428307  409972 node_ready.go:38] duration metric: took 13.524714101s waiting for node "multinode-058614" to be "Ready" ...
	I1002 19:51:10.428321  409972 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 19:51:10.428430  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/namespaces/kube-system/pods
	I1002 19:51:10.428441  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:10.428452  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:10.428462  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:10.431954  409972 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 19:51:10.431970  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:10.431979  409972 round_trippers.go:580]     Audit-Id: 27edb140-a70d-4cb0-847c-2637632ca651
	I1002 19:51:10.431987  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:10.431996  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:10.432009  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:10.432016  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:10.432025  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:10 GMT
	I1002 19:51:10.433215  409972 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"440"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ssbfx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f646d313-d0bd-4b09-9968-ff0d119dfae3","resourceVersion":"439","creationTimestamp":"2023-10-02T19:50:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"54300122-8957-4e7c-9f9d-579c1d9eda57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:50:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"54300122-8957-4e7c-9f9d-579c1d9eda57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53972 chars]
	I1002 19:51:10.436250  409972 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ssbfx" in "kube-system" namespace to be "Ready" ...
	I1002 19:51:10.436338  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssbfx
	I1002 19:51:10.436349  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:10.436360  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:10.436371  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:10.438312  409972 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 19:51:10.438329  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:10.438339  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:10.438348  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:10.438357  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:10.438363  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:10 GMT
	I1002 19:51:10.438368  409972 round_trippers.go:580]     Audit-Id: 1790206f-8e98-4372-9c92-26f253b4824f
	I1002 19:51:10.438373  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:10.438522  409972 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssbfx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f646d313-d0bd-4b09-9968-ff0d119dfae3","resourceVersion":"439","creationTimestamp":"2023-10-02T19:50:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"54300122-8957-4e7c-9f9d-579c1d9eda57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:50:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"54300122-8957-4e7c-9f9d-579c1d9eda57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1002 19:51:10.439000  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:10.439014  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:10.439025  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:10.439034  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:10.440962  409972 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 19:51:10.440979  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:10.440988  409972 round_trippers.go:580]     Audit-Id: c9d1f1a1-3282-4418-841f-44029d5dd91b
	I1002 19:51:10.440996  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:10.441003  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:10.441013  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:10.441022  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:10.441032  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:10 GMT
	I1002 19:51:10.441204  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"434","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1002 19:51:10.441618  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssbfx
	I1002 19:51:10.441632  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:10.441644  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:10.441654  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:10.443588  409972 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 19:51:10.443603  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:10.443610  409972 round_trippers.go:580]     Audit-Id: eb583b9a-1d32-41f0-acec-3570b1809031
	I1002 19:51:10.443615  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:10.443621  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:10.443625  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:10.443630  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:10.443637  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:10 GMT
	I1002 19:51:10.443845  409972 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssbfx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f646d313-d0bd-4b09-9968-ff0d119dfae3","resourceVersion":"439","creationTimestamp":"2023-10-02T19:50:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"54300122-8957-4e7c-9f9d-579c1d9eda57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:50:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"54300122-8957-4e7c-9f9d-579c1d9eda57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1002 19:51:10.444242  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:10.444256  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:10.444262  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:10.444268  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:10.446317  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:10.446332  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:10.446338  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:10.446343  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:10.446351  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:10.446359  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:10 GMT
	I1002 19:51:10.446368  409972 round_trippers.go:580]     Audit-Id: 465c93e9-ad01-4416-98d2-d2d27b094d24
	I1002 19:51:10.446380  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:10.446527  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"434","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1002 19:51:10.947421  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssbfx
	I1002 19:51:10.947465  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:10.947473  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:10.947479  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:10.952061  409972 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 19:51:10.952088  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:10.952098  409972 round_trippers.go:580]     Audit-Id: 7f668cef-5a12-4dad-a38b-75d5aee83317
	I1002 19:51:10.952106  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:10.952114  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:10.952122  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:10.952132  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:10.952146  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:10 GMT
	I1002 19:51:10.952294  409972 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssbfx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f646d313-d0bd-4b09-9968-ff0d119dfae3","resourceVersion":"439","creationTimestamp":"2023-10-02T19:50:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"54300122-8957-4e7c-9f9d-579c1d9eda57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:50:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"54300122-8957-4e7c-9f9d-579c1d9eda57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1002 19:51:10.952737  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:10.952749  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:10.952758  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:10.952764  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:10.955761  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:10.955781  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:10.955787  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:10 GMT
	I1002 19:51:10.955792  409972 round_trippers.go:580]     Audit-Id: 5774f53a-1d71-48f7-a6f1-03c2142ec64e
	I1002 19:51:10.955797  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:10.955804  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:10.955813  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:10.955820  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:10.956045  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"434","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1002 19:51:11.447763  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssbfx
	I1002 19:51:11.447786  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:11.447795  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:11.447800  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:11.464599  409972 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1002 19:51:11.464633  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:11.464644  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:11.464654  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:11.464663  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:11.464671  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:11 GMT
	I1002 19:51:11.464679  409972 round_trippers.go:580]     Audit-Id: 2fb1cb5c-ab65-42f2-9354-7652030f740e
	I1002 19:51:11.464687  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:11.465104  409972 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssbfx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f646d313-d0bd-4b09-9968-ff0d119dfae3","resourceVersion":"439","creationTimestamp":"2023-10-02T19:50:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"54300122-8957-4e7c-9f9d-579c1d9eda57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:50:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"54300122-8957-4e7c-9f9d-579c1d9eda57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1002 19:51:11.465600  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:11.465617  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:11.465624  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:11.465629  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:11.470338  409972 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 19:51:11.470364  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:11.470373  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:11.470381  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:11 GMT
	I1002 19:51:11.470390  409972 round_trippers.go:580]     Audit-Id: 3eee9c4c-9b40-43ae-a307-7755e357ff30
	I1002 19:51:11.470398  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:11.470406  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:11.470415  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:11.470540  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"434","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1002 19:51:11.947064  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssbfx
	I1002 19:51:11.947091  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:11.947100  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:11.947107  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:11.952624  409972 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1002 19:51:11.952652  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:11.952663  409972 round_trippers.go:580]     Audit-Id: 7d07da29-2a42-4d4d-ae0b-26751080b181
	I1002 19:51:11.952671  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:11.952679  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:11.952687  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:11.952696  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:11.952708  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:11 GMT
	I1002 19:51:11.953856  409972 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssbfx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f646d313-d0bd-4b09-9968-ff0d119dfae3","resourceVersion":"439","creationTimestamp":"2023-10-02T19:50:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"54300122-8957-4e7c-9f9d-579c1d9eda57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:50:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"54300122-8957-4e7c-9f9d-579c1d9eda57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1002 19:51:11.954460  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:11.954482  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:11.954493  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:11.954503  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:11.956259  409972 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 19:51:11.956275  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:11.956282  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:11.956288  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:11.956293  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:11.956309  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:11 GMT
	I1002 19:51:11.956324  409972 round_trippers.go:580]     Audit-Id: 0e2d7dae-387e-43e4-8215-ad7bce79f38b
	I1002 19:51:11.956332  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:11.956479  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"434","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1002 19:51:12.447094  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssbfx
	I1002 19:51:12.447120  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:12.447133  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:12.447139  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:12.452504  409972 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1002 19:51:12.452526  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:12.452533  409972 round_trippers.go:580]     Audit-Id: f2ae639e-87ec-4a92-88d3-7fe1db25f693
	I1002 19:51:12.452541  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:12.452549  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:12.452557  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:12.452566  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:12.452574  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:12 GMT
	I1002 19:51:12.452794  409972 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssbfx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f646d313-d0bd-4b09-9968-ff0d119dfae3","resourceVersion":"454","creationTimestamp":"2023-10-02T19:50:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"54300122-8957-4e7c-9f9d-579c1d9eda57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:50:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"54300122-8957-4e7c-9f9d-579c1d9eda57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6283 chars]
	I1002 19:51:12.453286  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:12.453301  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:12.453309  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:12.453315  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:12.459041  409972 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1002 19:51:12.459070  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:12.459077  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:12.459082  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:12.459087  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:12.459093  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:12 GMT
	I1002 19:51:12.459098  409972 round_trippers.go:580]     Audit-Id: 9641f0c1-d919-4698-a91b-f2218dbc711a
	I1002 19:51:12.459104  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:12.459944  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"434","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1002 19:51:12.460237  409972 pod_ready.go:92] pod "coredns-5dd5756b68-ssbfx" in "kube-system" namespace has status "Ready":"True"
	I1002 19:51:12.460252  409972 pod_ready.go:81] duration metric: took 2.023979335s waiting for pod "coredns-5dd5756b68-ssbfx" in "kube-system" namespace to be "Ready" ...
	I1002 19:51:12.460261  409972 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-058614" in "kube-system" namespace to be "Ready" ...
	I1002 19:51:12.460319  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-058614
	I1002 19:51:12.460328  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:12.460334  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:12.460340  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:12.464564  409972 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1002 19:51:12.464582  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:12.464588  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:12.464593  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:12 GMT
	I1002 19:51:12.464599  409972 round_trippers.go:580]     Audit-Id: ad0031c9-f886-41d3-8389-40aa9e944351
	I1002 19:51:12.464604  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:12.464611  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:12.464618  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:12.464955  409972 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-058614","namespace":"kube-system","uid":"a28dcb7b-9677-46e1-bef8-a7fa010f156b","resourceVersion":"425","creationTimestamp":"2023-10-02T19:50:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.83:2379","kubernetes.io/config.hash":"106f287475a5843afbb16e738a4dd1f4","kubernetes.io/config.mirror":"106f287475a5843afbb16e738a4dd1f4","kubernetes.io/config.seen":"2023-10-02T19:50:36.758483921Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:50:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5862 chars]
	I1002 19:51:12.465316  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:12.465326  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:12.465332  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:12.465338  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:12.470636  409972 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1002 19:51:12.470653  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:12.470659  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:12.470665  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:12.470672  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:12 GMT
	I1002 19:51:12.470680  409972 round_trippers.go:580]     Audit-Id: 1bc81f98-9619-42e8-99c1-7886f1c1a8b8
	I1002 19:51:12.470689  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:12.470698  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:12.470830  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"434","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1002 19:51:12.471095  409972 pod_ready.go:92] pod "etcd-multinode-058614" in "kube-system" namespace has status "Ready":"True"
	I1002 19:51:12.471108  409972 pod_ready.go:81] duration metric: took 10.841184ms waiting for pod "etcd-multinode-058614" in "kube-system" namespace to be "Ready" ...
	I1002 19:51:12.471120  409972 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-058614" in "kube-system" namespace to be "Ready" ...
	I1002 19:51:12.471165  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-058614
	I1002 19:51:12.471172  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:12.471179  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:12.471184  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:12.473913  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:12.473928  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:12.473933  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:12 GMT
	I1002 19:51:12.473938  409972 round_trippers.go:580]     Audit-Id: ea7a1664-e1f9-47e2-955d-078915eca247
	I1002 19:51:12.473943  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:12.473948  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:12.473955  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:12.473964  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:12.474097  409972 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-058614","namespace":"kube-system","uid":"f0e4433a-e791-480f-8306-ecbdc6d3706f","resourceVersion":"422","creationTimestamp":"2023-10-02T19:50:45Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.83:8443","kubernetes.io/config.hash":"ad6af5517be9484355d3192cf7264036","kubernetes.io/config.mirror":"ad6af5517be9484355d3192cf7264036","kubernetes.io/config.seen":"2023-10-02T19:50:44.955487926Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:50:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7399 chars]
	I1002 19:51:12.474457  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:12.474467  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:12.474474  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:12.474480  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:12.476196  409972 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 19:51:12.476210  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:12.476218  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:12.476237  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:12.476245  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:12.476253  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:12 GMT
	I1002 19:51:12.476262  409972 round_trippers.go:580]     Audit-Id: 887efd40-d77f-4c94-9f6f-82c089e8edf3
	I1002 19:51:12.476274  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:12.476375  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"434","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1002 19:51:12.476610  409972 pod_ready.go:92] pod "kube-apiserver-multinode-058614" in "kube-system" namespace has status "Ready":"True"
	I1002 19:51:12.476621  409972 pod_ready.go:81] duration metric: took 5.495885ms waiting for pod "kube-apiserver-multinode-058614" in "kube-system" namespace to be "Ready" ...
	I1002 19:51:12.476629  409972 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-058614" in "kube-system" namespace to be "Ready" ...
	I1002 19:51:12.476669  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-058614
	I1002 19:51:12.476681  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:12.476688  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:12.476694  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:12.478548  409972 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 19:51:12.478559  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:12.478565  409972 round_trippers.go:580]     Audit-Id: fb9c6912-d550-4083-91a6-abb7a4674df8
	I1002 19:51:12.478570  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:12.478575  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:12.478580  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:12.478584  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:12.478589  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:12 GMT
	I1002 19:51:12.478850  409972 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-058614","namespace":"kube-system","uid":"5ed0ef01-4ceb-4702-9e56-ea5bd25d377d","resourceVersion":"423","creationTimestamp":"2023-10-02T19:50:45Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a6f177c5a53135e8813857ecd09e4546","kubernetes.io/config.mirror":"a6f177c5a53135e8813857ecd09e4546","kubernetes.io/config.seen":"2023-10-02T19:50:44.955482294Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:50:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6969 chars]
	I1002 19:51:12.479160  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:12.479170  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:12.479176  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:12.479182  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:12.481014  409972 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 19:51:12.481027  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:12.481033  409972 round_trippers.go:580]     Audit-Id: a9c77bf6-7b4e-4188-a6af-256754ca2761
	I1002 19:51:12.481038  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:12.481043  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:12.481047  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:12.481052  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:12.481057  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:12 GMT
	I1002 19:51:12.481176  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"434","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1002 19:51:12.481414  409972 pod_ready.go:92] pod "kube-controller-manager-multinode-058614" in "kube-system" namespace has status "Ready":"True"
	I1002 19:51:12.481425  409972 pod_ready.go:81] duration metric: took 4.79015ms waiting for pod "kube-controller-manager-multinode-058614" in "kube-system" namespace to be "Ready" ...
	I1002 19:51:12.481434  409972 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8r7q6" in "kube-system" namespace to be "Ready" ...
	I1002 19:51:12.625858  409972 request.go:629] Waited for 144.346864ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.83:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8r7q6
	I1002 19:51:12.625953  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8r7q6
	I1002 19:51:12.625962  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:12.625974  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:12.625983  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:12.628952  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:12.628972  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:12.628979  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:12 GMT
	I1002 19:51:12.628985  409972 round_trippers.go:580]     Audit-Id: bb80e7d6-9c97-4707-a69d-1b9602ec00d7
	I1002 19:51:12.628990  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:12.628995  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:12.629000  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:12.629005  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:12.629381  409972 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8r7q6","generateName":"kube-proxy-","namespace":"kube-system","uid":"075b91f3-9483-4bb8-9afd-dec07038f014","resourceVersion":"418","creationTimestamp":"2023-10-02T19:50:57Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7554e598-7b2a-499d-95f7-df0eaaed9e8a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:50:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7554e598-7b2a-499d-95f7-df0eaaed9e8a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
	I1002 19:51:12.825253  409972 request.go:629] Waited for 195.313664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:12.825332  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:12.825337  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:12.825345  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:12.825354  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:12.828149  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:12.828199  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:12.828224  409972 round_trippers.go:580]     Audit-Id: abc3bd19-693d-408e-9acf-37bc5ed08c0b
	I1002 19:51:12.828234  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:12.828243  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:12.828252  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:12.828260  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:12.828267  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:12 GMT
	I1002 19:51:12.828400  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"434","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1002 19:51:12.828692  409972 pod_ready.go:92] pod "kube-proxy-8r7q6" in "kube-system" namespace has status "Ready":"True"
	I1002 19:51:12.828705  409972 pod_ready.go:81] duration metric: took 347.266032ms waiting for pod "kube-proxy-8r7q6" in "kube-system" namespace to be "Ready" ...
	I1002 19:51:12.828715  409972 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-058614" in "kube-system" namespace to be "Ready" ...
	I1002 19:51:13.026176  409972 request.go:629] Waited for 197.371879ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.83:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-058614
	I1002 19:51:13.026245  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-058614
	I1002 19:51:13.026250  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:13.026258  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:13.026264  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:13.029240  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:13.029264  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:13.029271  409972 round_trippers.go:580]     Audit-Id: 9256dd33-5961-4732-8d47-52ddc8f332ad
	I1002 19:51:13.029277  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:13.029282  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:13.029287  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:13.029292  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:13.029297  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:13 GMT
	I1002 19:51:13.029587  409972 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-058614","namespace":"kube-system","uid":"f18491bf-ec7a-41bc-b666-1553594afa9a","resourceVersion":"424","creationTimestamp":"2023-10-02T19:50:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"68d95ca928ce5c338c3970f4212341e1","kubernetes.io/config.mirror":"68d95ca928ce5c338c3970f4212341e1","kubernetes.io/config.seen":"2023-10-02T19:50:36.758482865Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:50:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4699 chars]
	I1002 19:51:13.226101  409972 request.go:629] Waited for 196.019927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:13.226211  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:51:13.226223  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:13.226234  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:13.226243  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:13.229188  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:13.229220  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:13.229233  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:13.229243  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:13 GMT
	I1002 19:51:13.229251  409972 round_trippers.go:580]     Audit-Id: 2b78fc66-0b8b-4d5b-94d8-ed0cf42ddf3c
	I1002 19:51:13.229260  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:13.229271  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:13.229279  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:13.229492  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"434","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I1002 19:51:13.229826  409972 pod_ready.go:92] pod "kube-scheduler-multinode-058614" in "kube-system" namespace has status "Ready":"True"
	I1002 19:51:13.229847  409972 pod_ready.go:81] duration metric: took 401.122002ms waiting for pod "kube-scheduler-multinode-058614" in "kube-system" namespace to be "Ready" ...
	I1002 19:51:13.229862  409972 pod_ready.go:38] duration metric: took 2.801500196s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 19:51:13.229890  409972 api_server.go:52] waiting for apiserver process to appear ...
	I1002 19:51:13.229975  409972 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 19:51:13.243202  409972 command_runner.go:130] > 1849
	I1002 19:51:13.243303  409972 api_server.go:72] duration metric: took 16.49779679s to wait for apiserver process to appear ...
	I1002 19:51:13.243330  409972 api_server.go:88] waiting for apiserver healthz status ...
	I1002 19:51:13.243355  409972 api_server.go:253] Checking apiserver healthz at https://192.168.39.83:8443/healthz ...
	I1002 19:51:13.248084  409972 api_server.go:279] https://192.168.39.83:8443/healthz returned 200:
	ok
	I1002 19:51:13.248145  409972 round_trippers.go:463] GET https://192.168.39.83:8443/version
	I1002 19:51:13.248156  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:13.248164  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:13.248180  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:13.249336  409972 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 19:51:13.249354  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:13.249363  409972 round_trippers.go:580]     Content-Length: 263
	I1002 19:51:13.249371  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:13 GMT
	I1002 19:51:13.249379  409972 round_trippers.go:580]     Audit-Id: 573cfb18-c4aa-42cb-9532-27828bc499ce
	I1002 19:51:13.249392  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:13.249405  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:13.249417  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:13.249429  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:13.249457  409972 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.2",
	  "gitCommit": "89a4ea3e1e4ddd7f7572286090359983e0387b2f",
	  "gitTreeState": "clean",
	  "buildDate": "2023-09-13T09:29:07Z",
	  "goVersion": "go1.20.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1002 19:51:13.249546  409972 api_server.go:141] control plane version: v1.28.2
	I1002 19:51:13.249564  409972 api_server.go:131] duration metric: took 6.226871ms to wait for apiserver health ...
	I1002 19:51:13.249575  409972 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 19:51:13.426260  409972 request.go:629] Waited for 176.603682ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.83:8443/api/v1/namespaces/kube-system/pods
	I1002 19:51:13.426335  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/namespaces/kube-system/pods
	I1002 19:51:13.426340  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:13.426348  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:13.426357  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:13.429751  409972 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 19:51:13.429774  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:13.429781  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:13.429786  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:13.429792  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:13.429797  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:13 GMT
	I1002 19:51:13.429802  409972 round_trippers.go:580]     Audit-Id: fd2e3998-c452-46b2-9cef-66472f12ff12
	I1002 19:51:13.429807  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:13.431278  409972 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"459"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ssbfx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f646d313-d0bd-4b09-9968-ff0d119dfae3","resourceVersion":"454","creationTimestamp":"2023-10-02T19:50:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"54300122-8957-4e7c-9f9d-579c1d9eda57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:50:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"54300122-8957-4e7c-9f9d-579c1d9eda57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I1002 19:51:13.433670  409972 system_pods.go:59] 8 kube-system pods found
	I1002 19:51:13.433698  409972 system_pods.go:61] "coredns-5dd5756b68-ssbfx" [f646d313-d0bd-4b09-9968-ff0d119dfae3] Running
	I1002 19:51:13.433706  409972 system_pods.go:61] "etcd-multinode-058614" [a28dcb7b-9677-46e1-bef8-a7fa010f156b] Running
	I1002 19:51:13.433713  409972 system_pods.go:61] "kindnet-h5ml2" [5a69d1f9-8152-4743-9aa0-b9ebe989d32a] Running
	I1002 19:51:13.433719  409972 system_pods.go:61] "kube-apiserver-multinode-058614" [f0e4433a-e791-480f-8306-ecbdc6d3706f] Running
	I1002 19:51:13.433726  409972 system_pods.go:61] "kube-controller-manager-multinode-058614" [5ed0ef01-4ceb-4702-9e56-ea5bd25d377d] Running
	I1002 19:51:13.433730  409972 system_pods.go:61] "kube-proxy-8r7q6" [075b91f3-9483-4bb8-9afd-dec07038f014] Running
	I1002 19:51:13.433740  409972 system_pods.go:61] "kube-scheduler-multinode-058614" [f18491bf-ec7a-41bc-b666-1553594afa9a] Running
	I1002 19:51:13.433747  409972 system_pods.go:61] "storage-provisioner" [6107368d-ae74-461e-a41c-fd7cefe35161] Running
	I1002 19:51:13.433762  409972 system_pods.go:74] duration metric: took 184.180089ms to wait for pod list to return data ...
	I1002 19:51:13.433771  409972 default_sa.go:34] waiting for default service account to be created ...
	I1002 19:51:13.626191  409972 request.go:629] Waited for 192.337495ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.83:8443/api/v1/namespaces/default/serviceaccounts
	I1002 19:51:13.626275  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/namespaces/default/serviceaccounts
	I1002 19:51:13.626282  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:13.626289  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:13.626297  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:13.628961  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:13.628980  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:13.628987  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:13.628994  409972 round_trippers.go:580]     Content-Length: 261
	I1002 19:51:13.628999  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:13 GMT
	I1002 19:51:13.629004  409972 round_trippers.go:580]     Audit-Id: 77d2cf42-55ca-45f4-93d3-8d055ba8c7c0
	I1002 19:51:13.629010  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:13.629015  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:13.629021  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:13.629046  409972 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"460"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"dbac398b-43bb-4eb3-9474-f4c072b2dbce","resourceVersion":"344","creationTimestamp":"2023-10-02T19:50:56Z"}}]}
	I1002 19:51:13.629291  409972 default_sa.go:45] found service account: "default"
	I1002 19:51:13.629321  409972 default_sa.go:55] duration metric: took 195.540165ms for default service account to be created ...
	I1002 19:51:13.629330  409972 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 19:51:13.825814  409972 request.go:629] Waited for 196.405164ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.83:8443/api/v1/namespaces/kube-system/pods
	I1002 19:51:13.825898  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/namespaces/kube-system/pods
	I1002 19:51:13.825908  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:13.825920  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:13.825934  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:13.829804  409972 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 19:51:13.829831  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:13.829841  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:13.829849  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:13.829857  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:13.829866  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:13 GMT
	I1002 19:51:13.829875  409972 round_trippers.go:580]     Audit-Id: 47ade420-cd98-4720-a2b6-7f7f742f32f3
	I1002 19:51:13.829884  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:13.830997  409972 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"460"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ssbfx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f646d313-d0bd-4b09-9968-ff0d119dfae3","resourceVersion":"454","creationTimestamp":"2023-10-02T19:50:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"54300122-8957-4e7c-9f9d-579c1d9eda57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:50:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"54300122-8957-4e7c-9f9d-579c1d9eda57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I1002 19:51:13.832673  409972 system_pods.go:86] 8 kube-system pods found
	I1002 19:51:13.832700  409972 system_pods.go:89] "coredns-5dd5756b68-ssbfx" [f646d313-d0bd-4b09-9968-ff0d119dfae3] Running
	I1002 19:51:13.832708  409972 system_pods.go:89] "etcd-multinode-058614" [a28dcb7b-9677-46e1-bef8-a7fa010f156b] Running
	I1002 19:51:13.832714  409972 system_pods.go:89] "kindnet-h5ml2" [5a69d1f9-8152-4743-9aa0-b9ebe989d32a] Running
	I1002 19:51:13.832720  409972 system_pods.go:89] "kube-apiserver-multinode-058614" [f0e4433a-e791-480f-8306-ecbdc6d3706f] Running
	I1002 19:51:13.832727  409972 system_pods.go:89] "kube-controller-manager-multinode-058614" [5ed0ef01-4ceb-4702-9e56-ea5bd25d377d] Running
	I1002 19:51:13.832733  409972 system_pods.go:89] "kube-proxy-8r7q6" [075b91f3-9483-4bb8-9afd-dec07038f014] Running
	I1002 19:51:13.832741  409972 system_pods.go:89] "kube-scheduler-multinode-058614" [f18491bf-ec7a-41bc-b666-1553594afa9a] Running
	I1002 19:51:13.832750  409972 system_pods.go:89] "storage-provisioner" [6107368d-ae74-461e-a41c-fd7cefe35161] Running
	I1002 19:51:13.832765  409972 system_pods.go:126] duration metric: took 203.425236ms to wait for k8s-apps to be running ...
	I1002 19:51:13.832780  409972 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 19:51:13.832835  409972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 19:51:13.845848  409972 system_svc.go:56] duration metric: took 13.059198ms WaitForService to wait for kubelet.
	I1002 19:51:13.845873  409972 kubeadm.go:581] duration metric: took 17.1003837s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 19:51:13.845898  409972 node_conditions.go:102] verifying NodePressure condition ...
	I1002 19:51:14.025297  409972 request.go:629] Waited for 179.297314ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.83:8443/api/v1/nodes
	I1002 19:51:14.025447  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes
	I1002 19:51:14.025465  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:14.025479  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:14.025490  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:14.028517  409972 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 19:51:14.028540  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:14.028550  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:14.028559  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:14.028567  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:14.028573  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:14 GMT
	I1002 19:51:14.028580  409972 round_trippers.go:580]     Audit-Id: c8900e04-835d-4e07-853e-4dc9693d6a58
	I1002 19:51:14.028586  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:14.028698  409972 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"460"},"items":[{"metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"434","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4835 chars]
	I1002 19:51:14.029155  409972 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 19:51:14.029183  409972 node_conditions.go:123] node cpu capacity is 2
	I1002 19:51:14.029198  409972 node_conditions.go:105] duration metric: took 183.29476ms to run NodePressure ...
	I1002 19:51:14.029210  409972 start.go:228] waiting for startup goroutines ...
	I1002 19:51:14.029222  409972 start.go:233] waiting for cluster config update ...
	I1002 19:51:14.029232  409972 start.go:242] writing updated cluster config ...
	I1002 19:51:14.031401  409972 out.go:177] 
	I1002 19:51:14.033005  409972 config.go:182] Loaded profile config "multinode-058614": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 19:51:14.033073  409972 profile.go:148] Saving config to /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/config.json ...
	I1002 19:51:14.034809  409972 out.go:177] * Starting worker node multinode-058614-m02 in cluster multinode-058614
	I1002 19:51:14.036075  409972 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 19:51:14.036099  409972 cache.go:57] Caching tarball of preloaded images
	I1002 19:51:14.036186  409972 preload.go:174] Found /home/jenkins/minikube-integration/17323-390762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1002 19:51:14.036195  409972 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 19:51:14.036256  409972 profile.go:148] Saving config to /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/config.json ...
	I1002 19:51:14.036406  409972 start.go:365] acquiring machines lock for multinode-058614-m02: {Name:mk4eec10b828b68be104dfa4b7220ed2aea8b62b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 19:51:14.036449  409972 start.go:369] acquired machines lock for "multinode-058614-m02" in 24.018µs
	I1002 19:51:14.036466  409972 start.go:93] Provisioning new machine with config: &{Name:multinode-058614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.2 ClusterName:multinode-058614 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.83 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReque
sted:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1002 19:51:14.036526  409972 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1002 19:51:14.038203  409972 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1002 19:51:14.038290  409972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:51:14.038327  409972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:51:14.052578  409972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39627
	I1002 19:51:14.053035  409972 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:51:14.053523  409972 main.go:141] libmachine: Using API Version  1
	I1002 19:51:14.053546  409972 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:51:14.054026  409972 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:51:14.054264  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetMachineName
	I1002 19:51:14.054453  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .DriverName
	I1002 19:51:14.054609  409972 start.go:159] libmachine.API.Create for "multinode-058614" (driver="kvm2")
	I1002 19:51:14.054651  409972 client.go:168] LocalClient.Create starting
	I1002 19:51:14.054690  409972 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca.pem
	I1002 19:51:14.054727  409972 main.go:141] libmachine: Decoding PEM data...
	I1002 19:51:14.054747  409972 main.go:141] libmachine: Parsing certificate...
	I1002 19:51:14.054817  409972 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17323-390762/.minikube/certs/cert.pem
	I1002 19:51:14.054842  409972 main.go:141] libmachine: Decoding PEM data...
	I1002 19:51:14.054860  409972 main.go:141] libmachine: Parsing certificate...
	I1002 19:51:14.054886  409972 main.go:141] libmachine: Running pre-create checks...
	I1002 19:51:14.054899  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .PreCreateCheck
	I1002 19:51:14.055056  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetConfigRaw
	I1002 19:51:14.055401  409972 main.go:141] libmachine: Creating machine...
	I1002 19:51:14.055416  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .Create
	I1002 19:51:14.055587  409972 main.go:141] libmachine: (multinode-058614-m02) Creating KVM machine...
	I1002 19:51:14.056758  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | found existing default KVM network
	I1002 19:51:14.056889  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | found existing private KVM network mk-multinode-058614
	I1002 19:51:14.056987  409972 main.go:141] libmachine: (multinode-058614-m02) Setting up store path in /home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614-m02 ...
	I1002 19:51:14.057016  409972 main.go:141] libmachine: (multinode-058614-m02) Building disk image from file:///home/jenkins/minikube-integration/17323-390762/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1002 19:51:14.057106  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | I1002 19:51:14.056988  410364 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17323-390762/.minikube
	I1002 19:51:14.057188  409972 main.go:141] libmachine: (multinode-058614-m02) Downloading /home/jenkins/minikube-integration/17323-390762/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17323-390762/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I1002 19:51:14.287082  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | I1002 19:51:14.286940  410364 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614-m02/id_rsa...
	I1002 19:51:14.372269  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | I1002 19:51:14.372105  410364 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614-m02/multinode-058614-m02.rawdisk...
	I1002 19:51:14.372313  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | Writing magic tar header
	I1002 19:51:14.372331  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | Writing SSH key tar header
	I1002 19:51:14.372348  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | I1002 19:51:14.372266  410364 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614-m02 ...
	I1002 19:51:14.372447  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614-m02
	I1002 19:51:14.372476  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17323-390762/.minikube/machines
	I1002 19:51:14.372491  409972 main.go:141] libmachine: (multinode-058614-m02) Setting executable bit set on /home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614-m02 (perms=drwx------)
	I1002 19:51:14.372514  409972 main.go:141] libmachine: (multinode-058614-m02) Setting executable bit set on /home/jenkins/minikube-integration/17323-390762/.minikube/machines (perms=drwxr-xr-x)
	I1002 19:51:14.372529  409972 main.go:141] libmachine: (multinode-058614-m02) Setting executable bit set on /home/jenkins/minikube-integration/17323-390762/.minikube (perms=drwxr-xr-x)
	I1002 19:51:14.372544  409972 main.go:141] libmachine: (multinode-058614-m02) Setting executable bit set on /home/jenkins/minikube-integration/17323-390762 (perms=drwxrwxr-x)
	I1002 19:51:14.372560  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17323-390762/.minikube
	I1002 19:51:14.372575  409972 main.go:141] libmachine: (multinode-058614-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1002 19:51:14.372585  409972 main.go:141] libmachine: (multinode-058614-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1002 19:51:14.372592  409972 main.go:141] libmachine: (multinode-058614-m02) Creating domain...
	I1002 19:51:14.372603  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17323-390762
	I1002 19:51:14.372610  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1002 19:51:14.372617  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | Checking permissions on dir: /home/jenkins
	I1002 19:51:14.372626  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | Checking permissions on dir: /home
	I1002 19:51:14.372660  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | Skipping /home - not owner
	I1002 19:51:14.373642  409972 main.go:141] libmachine: (multinode-058614-m02) define libvirt domain using xml: 
	I1002 19:51:14.373674  409972 main.go:141] libmachine: (multinode-058614-m02) <domain type='kvm'>
	I1002 19:51:14.373687  409972 main.go:141] libmachine: (multinode-058614-m02)   <name>multinode-058614-m02</name>
	I1002 19:51:14.373698  409972 main.go:141] libmachine: (multinode-058614-m02)   <memory unit='MiB'>2200</memory>
	I1002 19:51:14.373708  409972 main.go:141] libmachine: (multinode-058614-m02)   <vcpu>2</vcpu>
	I1002 19:51:14.373717  409972 main.go:141] libmachine: (multinode-058614-m02)   <features>
	I1002 19:51:14.373745  409972 main.go:141] libmachine: (multinode-058614-m02)     <acpi/>
	I1002 19:51:14.373756  409972 main.go:141] libmachine: (multinode-058614-m02)     <apic/>
	I1002 19:51:14.373770  409972 main.go:141] libmachine: (multinode-058614-m02)     <pae/>
	I1002 19:51:14.373786  409972 main.go:141] libmachine: (multinode-058614-m02)     
	I1002 19:51:14.373798  409972 main.go:141] libmachine: (multinode-058614-m02)   </features>
	I1002 19:51:14.373808  409972 main.go:141] libmachine: (multinode-058614-m02)   <cpu mode='host-passthrough'>
	I1002 19:51:14.373821  409972 main.go:141] libmachine: (multinode-058614-m02)   
	I1002 19:51:14.373837  409972 main.go:141] libmachine: (multinode-058614-m02)   </cpu>
	I1002 19:51:14.373850  409972 main.go:141] libmachine: (multinode-058614-m02)   <os>
	I1002 19:51:14.373861  409972 main.go:141] libmachine: (multinode-058614-m02)     <type>hvm</type>
	I1002 19:51:14.373875  409972 main.go:141] libmachine: (multinode-058614-m02)     <boot dev='cdrom'/>
	I1002 19:51:14.373911  409972 main.go:141] libmachine: (multinode-058614-m02)     <boot dev='hd'/>
	I1002 19:51:14.373933  409972 main.go:141] libmachine: (multinode-058614-m02)     <bootmenu enable='no'/>
	I1002 19:51:14.373952  409972 main.go:141] libmachine: (multinode-058614-m02)   </os>
	I1002 19:51:14.373968  409972 main.go:141] libmachine: (multinode-058614-m02)   <devices>
	I1002 19:51:14.373983  409972 main.go:141] libmachine: (multinode-058614-m02)     <disk type='file' device='cdrom'>
	I1002 19:51:14.374008  409972 main.go:141] libmachine: (multinode-058614-m02)       <source file='/home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614-m02/boot2docker.iso'/>
	I1002 19:51:14.374017  409972 main.go:141] libmachine: (multinode-058614-m02)       <target dev='hdc' bus='scsi'/>
	I1002 19:51:14.374025  409972 main.go:141] libmachine: (multinode-058614-m02)       <readonly/>
	I1002 19:51:14.374031  409972 main.go:141] libmachine: (multinode-058614-m02)     </disk>
	I1002 19:51:14.374041  409972 main.go:141] libmachine: (multinode-058614-m02)     <disk type='file' device='disk'>
	I1002 19:51:14.374078  409972 main.go:141] libmachine: (multinode-058614-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1002 19:51:14.374114  409972 main.go:141] libmachine: (multinode-058614-m02)       <source file='/home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614-m02/multinode-058614-m02.rawdisk'/>
	I1002 19:51:14.374133  409972 main.go:141] libmachine: (multinode-058614-m02)       <target dev='hda' bus='virtio'/>
	I1002 19:51:14.374146  409972 main.go:141] libmachine: (multinode-058614-m02)     </disk>
	I1002 19:51:14.374159  409972 main.go:141] libmachine: (multinode-058614-m02)     <interface type='network'>
	I1002 19:51:14.374174  409972 main.go:141] libmachine: (multinode-058614-m02)       <source network='mk-multinode-058614'/>
	I1002 19:51:14.374189  409972 main.go:141] libmachine: (multinode-058614-m02)       <model type='virtio'/>
	I1002 19:51:14.374206  409972 main.go:141] libmachine: (multinode-058614-m02)     </interface>
	I1002 19:51:14.374222  409972 main.go:141] libmachine: (multinode-058614-m02)     <interface type='network'>
	I1002 19:51:14.374235  409972 main.go:141] libmachine: (multinode-058614-m02)       <source network='default'/>
	I1002 19:51:14.374250  409972 main.go:141] libmachine: (multinode-058614-m02)       <model type='virtio'/>
	I1002 19:51:14.374261  409972 main.go:141] libmachine: (multinode-058614-m02)     </interface>
	I1002 19:51:14.374285  409972 main.go:141] libmachine: (multinode-058614-m02)     <serial type='pty'>
	I1002 19:51:14.374307  409972 main.go:141] libmachine: (multinode-058614-m02)       <target port='0'/>
	I1002 19:51:14.374322  409972 main.go:141] libmachine: (multinode-058614-m02)     </serial>
	I1002 19:51:14.374335  409972 main.go:141] libmachine: (multinode-058614-m02)     <console type='pty'>
	I1002 19:51:14.374355  409972 main.go:141] libmachine: (multinode-058614-m02)       <target type='serial' port='0'/>
	I1002 19:51:14.374366  409972 main.go:141] libmachine: (multinode-058614-m02)     </console>
	I1002 19:51:14.374373  409972 main.go:141] libmachine: (multinode-058614-m02)     <rng model='virtio'>
	I1002 19:51:14.374390  409972 main.go:141] libmachine: (multinode-058614-m02)       <backend model='random'>/dev/random</backend>
	I1002 19:51:14.374404  409972 main.go:141] libmachine: (multinode-058614-m02)     </rng>
	I1002 19:51:14.374416  409972 main.go:141] libmachine: (multinode-058614-m02)     
	I1002 19:51:14.374428  409972 main.go:141] libmachine: (multinode-058614-m02)     
	I1002 19:51:14.374440  409972 main.go:141] libmachine: (multinode-058614-m02)   </devices>
	I1002 19:51:14.374453  409972 main.go:141] libmachine: (multinode-058614-m02) </domain>
	I1002 19:51:14.374469  409972 main.go:141] libmachine: (multinode-058614-m02) 
	I1002 19:51:14.381148  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:8f:9f:eb in network default
	I1002 19:51:14.381698  409972 main.go:141] libmachine: (multinode-058614-m02) Ensuring networks are active...
	I1002 19:51:14.381714  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:14.382426  409972 main.go:141] libmachine: (multinode-058614-m02) Ensuring network default is active
	I1002 19:51:14.382720  409972 main.go:141] libmachine: (multinode-058614-m02) Ensuring network mk-multinode-058614 is active
	I1002 19:51:14.383144  409972 main.go:141] libmachine: (multinode-058614-m02) Getting domain xml...
	I1002 19:51:14.383995  409972 main.go:141] libmachine: (multinode-058614-m02) Creating domain...
	I1002 19:51:15.660818  409972 main.go:141] libmachine: (multinode-058614-m02) Waiting to get IP...
	I1002 19:51:15.661654  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:15.662026  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | unable to find current IP address of domain multinode-058614-m02 in network mk-multinode-058614
	I1002 19:51:15.662121  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | I1002 19:51:15.662042  410364 retry.go:31] will retry after 240.373979ms: waiting for machine to come up
	I1002 19:51:15.904593  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:15.905136  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | unable to find current IP address of domain multinode-058614-m02 in network mk-multinode-058614
	I1002 19:51:15.905167  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | I1002 19:51:15.905081  410364 retry.go:31] will retry after 316.30681ms: waiting for machine to come up
	I1002 19:51:16.222727  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:16.223348  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | unable to find current IP address of domain multinode-058614-m02 in network mk-multinode-058614
	I1002 19:51:16.223374  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | I1002 19:51:16.223307  410364 retry.go:31] will retry after 486.867403ms: waiting for machine to come up
	I1002 19:51:16.711938  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:16.712447  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | unable to find current IP address of domain multinode-058614-m02 in network mk-multinode-058614
	I1002 19:51:16.712471  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | I1002 19:51:16.712389  410364 retry.go:31] will retry after 574.868835ms: waiting for machine to come up
	I1002 19:51:17.289313  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:17.289716  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | unable to find current IP address of domain multinode-058614-m02 in network mk-multinode-058614
	I1002 19:51:17.289741  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | I1002 19:51:17.289662  410364 retry.go:31] will retry after 682.49661ms: waiting for machine to come up
	I1002 19:51:17.973788  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:17.974306  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | unable to find current IP address of domain multinode-058614-m02 in network mk-multinode-058614
	I1002 19:51:17.974341  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | I1002 19:51:17.974270  410364 retry.go:31] will retry after 672.69009ms: waiting for machine to come up
	I1002 19:51:18.648264  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:18.648727  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | unable to find current IP address of domain multinode-058614-m02 in network mk-multinode-058614
	I1002 19:51:18.648757  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | I1002 19:51:18.648670  410364 retry.go:31] will retry after 929.780819ms: waiting for machine to come up
	I1002 19:51:19.580550  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:19.581020  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | unable to find current IP address of domain multinode-058614-m02 in network mk-multinode-058614
	I1002 19:51:19.581043  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | I1002 19:51:19.580976  410364 retry.go:31] will retry after 972.558401ms: waiting for machine to come up
	I1002 19:51:20.555206  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:20.555685  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | unable to find current IP address of domain multinode-058614-m02 in network mk-multinode-058614
	I1002 19:51:20.555714  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | I1002 19:51:20.555646  410364 retry.go:31] will retry after 1.368104655s: waiting for machine to come up
	I1002 19:51:21.925599  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:21.926079  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | unable to find current IP address of domain multinode-058614-m02 in network mk-multinode-058614
	I1002 19:51:21.926108  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | I1002 19:51:21.925999  410364 retry.go:31] will retry after 1.911224752s: waiting for machine to come up
	I1002 19:51:23.840119  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:23.840599  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | unable to find current IP address of domain multinode-058614-m02 in network mk-multinode-058614
	I1002 19:51:23.840630  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | I1002 19:51:23.840537  410364 retry.go:31] will retry after 2.840407804s: waiting for machine to come up
	I1002 19:51:26.684128  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:26.684511  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | unable to find current IP address of domain multinode-058614-m02 in network mk-multinode-058614
	I1002 19:51:26.684592  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | I1002 19:51:26.684484  410364 retry.go:31] will retry after 2.71208264s: waiting for machine to come up
	I1002 19:51:29.397919  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:29.398390  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | unable to find current IP address of domain multinode-058614-m02 in network mk-multinode-058614
	I1002 19:51:29.398417  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | I1002 19:51:29.398311  410364 retry.go:31] will retry after 3.828049025s: waiting for machine to come up
	I1002 19:51:33.231201  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:33.231683  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | unable to find current IP address of domain multinode-058614-m02 in network mk-multinode-058614
	I1002 19:51:33.231718  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | I1002 19:51:33.231627  410364 retry.go:31] will retry after 3.504022385s: waiting for machine to come up
	I1002 19:51:36.737853  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:36.738276  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has current primary IP address 192.168.39.104 and MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:36.738313  409972 main.go:141] libmachine: (multinode-058614-m02) Found IP for machine: 192.168.39.104
	I1002 19:51:36.738328  409972 main.go:141] libmachine: (multinode-058614-m02) Reserving static IP address...
	I1002 19:51:36.738850  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | unable to find host DHCP lease matching {name: "multinode-058614-m02", mac: "52:54:00:fb:71:7c", ip: "192.168.39.104"} in network mk-multinode-058614
	I1002 19:51:36.814158  409972 main.go:141] libmachine: (multinode-058614-m02) Reserved static IP address: 192.168.39.104
	I1002 19:51:36.814196  409972 main.go:141] libmachine: (multinode-058614-m02) Waiting for SSH to be available...
	I1002 19:51:36.814207  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | Getting to WaitForSSH function...
	I1002 19:51:36.816724  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:36.817107  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:71:7c", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:51:29 +0000 UTC Type:0 Mac:52:54:00:fb:71:7c Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fb:71:7c}
	I1002 19:51:36.817147  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:36.817334  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | Using SSH client type: external
	I1002 19:51:36.817377  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614-m02/id_rsa (-rw-------)
	I1002 19:51:36.817415  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 19:51:36.817432  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | About to run SSH command:
	I1002 19:51:36.817450  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | exit 0
	I1002 19:51:36.907366  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | SSH cmd err, output: <nil>: 
	I1002 19:51:36.907668  409972 main.go:141] libmachine: (multinode-058614-m02) KVM machine creation complete!
	I1002 19:51:36.907913  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetConfigRaw
	I1002 19:51:36.908435  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .DriverName
	I1002 19:51:36.908628  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .DriverName
	I1002 19:51:36.908809  409972 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1002 19:51:36.908829  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetState
	I1002 19:51:36.910179  409972 main.go:141] libmachine: Detecting operating system of created instance...
	I1002 19:51:36.910193  409972 main.go:141] libmachine: Waiting for SSH to be available...
	I1002 19:51:36.910199  409972 main.go:141] libmachine: Getting to WaitForSSH function...
	I1002 19:51:36.910206  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHHostname
	I1002 19:51:36.912569  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:36.912986  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:71:7c", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:51:29 +0000 UTC Type:0 Mac:52:54:00:fb:71:7c Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-058614-m02 Clientid:01:52:54:00:fb:71:7c}
	I1002 19:51:36.913028  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:36.913141  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHPort
	I1002 19:51:36.913289  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHKeyPath
	I1002 19:51:36.913441  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHKeyPath
	I1002 19:51:36.913634  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHUsername
	I1002 19:51:36.913806  409972 main.go:141] libmachine: Using SSH client type: native
	I1002 19:51:36.914206  409972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1002 19:51:36.914217  409972 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1002 19:51:37.030900  409972 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 19:51:37.030929  409972 main.go:141] libmachine: Detecting the provisioner...
	I1002 19:51:37.030938  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHHostname
	I1002 19:51:37.033776  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:37.034182  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:71:7c", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:51:29 +0000 UTC Type:0 Mac:52:54:00:fb:71:7c Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-058614-m02 Clientid:01:52:54:00:fb:71:7c}
	I1002 19:51:37.034218  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:37.034382  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHPort
	I1002 19:51:37.034623  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHKeyPath
	I1002 19:51:37.034760  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHKeyPath
	I1002 19:51:37.034918  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHUsername
	I1002 19:51:37.035061  409972 main.go:141] libmachine: Using SSH client type: native
	I1002 19:51:37.035402  409972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1002 19:51:37.035417  409972 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1002 19:51:37.152377  409972 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1002 19:51:37.152466  409972 main.go:141] libmachine: found compatible host: buildroot
	I1002 19:51:37.152480  409972 main.go:141] libmachine: Provisioning with buildroot...
	I1002 19:51:37.152496  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetMachineName
	I1002 19:51:37.152862  409972 buildroot.go:166] provisioning hostname "multinode-058614-m02"
	I1002 19:51:37.152896  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetMachineName
	I1002 19:51:37.153109  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHHostname
	I1002 19:51:37.155970  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:37.156326  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:71:7c", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:51:29 +0000 UTC Type:0 Mac:52:54:00:fb:71:7c Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-058614-m02 Clientid:01:52:54:00:fb:71:7c}
	I1002 19:51:37.156350  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:37.156554  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHPort
	I1002 19:51:37.156757  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHKeyPath
	I1002 19:51:37.156933  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHKeyPath
	I1002 19:51:37.157040  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHUsername
	I1002 19:51:37.157235  409972 main.go:141] libmachine: Using SSH client type: native
	I1002 19:51:37.157601  409972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1002 19:51:37.157616  409972 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-058614-m02 && echo "multinode-058614-m02" | sudo tee /etc/hostname
	I1002 19:51:37.284145  409972 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-058614-m02
	
	I1002 19:51:37.284183  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHHostname
	I1002 19:51:37.287308  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:37.287755  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:71:7c", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:51:29 +0000 UTC Type:0 Mac:52:54:00:fb:71:7c Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-058614-m02 Clientid:01:52:54:00:fb:71:7c}
	I1002 19:51:37.287793  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:37.287953  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHPort
	I1002 19:51:37.288134  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHKeyPath
	I1002 19:51:37.288343  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHKeyPath
	I1002 19:51:37.288486  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHUsername
	I1002 19:51:37.288682  409972 main.go:141] libmachine: Using SSH client type: native
	I1002 19:51:37.289029  409972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1002 19:51:37.289051  409972 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-058614-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-058614-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-058614-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 19:51:37.416408  409972 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 19:51:37.416445  409972 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17323-390762/.minikube CaCertPath:/home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17323-390762/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17323-390762/.minikube}
	I1002 19:51:37.416472  409972 buildroot.go:174] setting up certificates
	I1002 19:51:37.416483  409972 provision.go:83] configureAuth start
	I1002 19:51:37.416493  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetMachineName
	I1002 19:51:37.416820  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetIP
	I1002 19:51:37.419560  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:37.419945  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:71:7c", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:51:29 +0000 UTC Type:0 Mac:52:54:00:fb:71:7c Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-058614-m02 Clientid:01:52:54:00:fb:71:7c}
	I1002 19:51:37.419979  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:37.420151  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHHostname
	I1002 19:51:37.422686  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:37.423009  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:71:7c", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:51:29 +0000 UTC Type:0 Mac:52:54:00:fb:71:7c Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-058614-m02 Clientid:01:52:54:00:fb:71:7c}
	I1002 19:51:37.423038  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:37.423159  409972 provision.go:138] copyHostCerts
	I1002 19:51:37.423196  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17323-390762/.minikube/ca.pem
	I1002 19:51:37.423240  409972 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-390762/.minikube/ca.pem, removing ...
	I1002 19:51:37.423258  409972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-390762/.minikube/ca.pem
	I1002 19:51:37.423345  409972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17323-390762/.minikube/ca.pem (1078 bytes)
	I1002 19:51:37.423485  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17323-390762/.minikube/cert.pem
	I1002 19:51:37.423513  409972 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-390762/.minikube/cert.pem, removing ...
	I1002 19:51:37.423522  409972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-390762/.minikube/cert.pem
	I1002 19:51:37.423552  409972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17323-390762/.minikube/cert.pem (1123 bytes)
	I1002 19:51:37.423604  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17323-390762/.minikube/key.pem
	I1002 19:51:37.423620  409972 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-390762/.minikube/key.pem, removing ...
	I1002 19:51:37.423624  409972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-390762/.minikube/key.pem
	I1002 19:51:37.423646  409972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17323-390762/.minikube/key.pem (1675 bytes)
	I1002 19:51:37.423738  409972 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17323-390762/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca-key.pem org=jenkins.multinode-058614-m02 san=[192.168.39.104 192.168.39.104 localhost 127.0.0.1 minikube multinode-058614-m02]
	I1002 19:51:37.597299  409972 provision.go:172] copyRemoteCerts
	I1002 19:51:37.597359  409972 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 19:51:37.597387  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHHostname
	I1002 19:51:37.601429  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:37.601827  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:71:7c", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:51:29 +0000 UTC Type:0 Mac:52:54:00:fb:71:7c Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-058614-m02 Clientid:01:52:54:00:fb:71:7c}
	I1002 19:51:37.601865  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:37.602007  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHPort
	I1002 19:51:37.602215  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHKeyPath
	I1002 19:51:37.602393  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHUsername
	I1002 19:51:37.602534  409972 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614-m02/id_rsa Username:docker}
	I1002 19:51:37.691121  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 19:51:37.691207  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 19:51:37.714001  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 19:51:37.714067  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 19:51:37.737897  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 19:51:37.737969  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1002 19:51:37.762185  409972 provision.go:86] duration metric: configureAuth took 345.686325ms
	I1002 19:51:37.762213  409972 buildroot.go:189] setting minikube options for container-runtime
	I1002 19:51:37.762441  409972 config.go:182] Loaded profile config "multinode-058614": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 19:51:37.762472  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .DriverName
	I1002 19:51:37.762795  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHHostname
	I1002 19:51:37.765665  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:37.766061  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:71:7c", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:51:29 +0000 UTC Type:0 Mac:52:54:00:fb:71:7c Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-058614-m02 Clientid:01:52:54:00:fb:71:7c}
	I1002 19:51:37.766095  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:37.766237  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHPort
	I1002 19:51:37.766442  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHKeyPath
	I1002 19:51:37.766604  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHKeyPath
	I1002 19:51:37.766808  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHUsername
	I1002 19:51:37.766998  409972 main.go:141] libmachine: Using SSH client type: native
	I1002 19:51:37.767388  409972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1002 19:51:37.767402  409972 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 19:51:37.884911  409972 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1002 19:51:37.884944  409972 buildroot.go:70] root file system type: tmpfs
	I1002 19:51:37.885453  409972 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 19:51:37.885493  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHHostname
	I1002 19:51:37.889572  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:37.889924  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:71:7c", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:51:29 +0000 UTC Type:0 Mac:52:54:00:fb:71:7c Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-058614-m02 Clientid:01:52:54:00:fb:71:7c}
	I1002 19:51:37.889967  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:37.890117  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHPort
	I1002 19:51:37.890312  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHKeyPath
	I1002 19:51:37.890452  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHKeyPath
	I1002 19:51:37.890559  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHUsername
	I1002 19:51:37.890751  409972 main.go:141] libmachine: Using SSH client type: native
	I1002 19:51:37.891076  409972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1002 19:51:37.891137  409972 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.83"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 19:51:38.024390  409972 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.83
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1002 19:51:38.024433  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHHostname
	I1002 19:51:38.027243  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:38.027669  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:71:7c", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:51:29 +0000 UTC Type:0 Mac:52:54:00:fb:71:7c Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-058614-m02 Clientid:01:52:54:00:fb:71:7c}
	I1002 19:51:38.027704  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:38.027869  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHPort
	I1002 19:51:38.028056  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHKeyPath
	I1002 19:51:38.028230  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHKeyPath
	I1002 19:51:38.028381  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHUsername
	I1002 19:51:38.028572  409972 main.go:141] libmachine: Using SSH client type: native
	I1002 19:51:38.028938  409972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1002 19:51:38.028960  409972 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1002 19:51:38.830209  409972 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
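	The unit install above uses a compare-and-replace pattern: write the rendered unit to `docker.service.new`, and only swap it in (and reload/restart) when it differs from what is installed. A minimal sketch of that pattern, assuming a scratch directory stands in for `/lib/systemd/system` and omitting the `systemctl` calls so it runs anywhere:

```shell
set -eu
unit_dir=$(mktemp -d)
current="$unit_dir/docker.service"
new="$unit_dir/docker.service.new"

# Render the candidate unit (truncated here; the log shows the full file).
printf '%s\n' '[Unit]' 'Description=Docker Application Container Engine' > "$new"

# Same idempotent pattern as the log's diff/mv: replace only on difference.
if ! diff -u "$current" "$new" >/dev/null 2>&1; then
    mv "$new" "$current"
    echo "unit updated"
else
    rm -f "$new"
    echo "unit unchanged"
fi
```

On a fresh VM, as in this run, `diff` fails with "No such file or directory" and the new unit is installed unconditionally.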
	I1002 19:51:38.830257  409972 main.go:141] libmachine: Checking connection to Docker...
	I1002 19:51:38.830277  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetURL
	I1002 19:51:38.831766  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | Using libvirt version 6000000
	I1002 19:51:38.834051  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:38.834428  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:71:7c", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:51:29 +0000 UTC Type:0 Mac:52:54:00:fb:71:7c Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-058614-m02 Clientid:01:52:54:00:fb:71:7c}
	I1002 19:51:38.834466  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:38.834622  409972 main.go:141] libmachine: Docker is up and running!
	I1002 19:51:38.834641  409972 main.go:141] libmachine: Reticulating splines...
	I1002 19:51:38.834649  409972 client.go:171] LocalClient.Create took 24.779987744s
	I1002 19:51:38.834672  409972 start.go:167] duration metric: libmachine.API.Create for "multinode-058614" took 24.780065955s
	I1002 19:51:38.834680  409972 start.go:300] post-start starting for "multinode-058614-m02" (driver="kvm2")
	I1002 19:51:38.834690  409972 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 19:51:38.834716  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .DriverName
	I1002 19:51:38.835031  409972 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 19:51:38.835055  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHHostname
	I1002 19:51:38.837343  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:38.837786  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:71:7c", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:51:29 +0000 UTC Type:0 Mac:52:54:00:fb:71:7c Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-058614-m02 Clientid:01:52:54:00:fb:71:7c}
	I1002 19:51:38.837819  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:38.837996  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHPort
	I1002 19:51:38.838227  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHKeyPath
	I1002 19:51:38.838388  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHUsername
	I1002 19:51:38.838590  409972 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614-m02/id_rsa Username:docker}
	I1002 19:51:38.925308  409972 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 19:51:38.929849  409972 command_runner.go:130] > NAME=Buildroot
	I1002 19:51:38.929867  409972 command_runner.go:130] > VERSION=2021.02.12-1-gb090841-dirty
	I1002 19:51:38.929872  409972 command_runner.go:130] > ID=buildroot
	I1002 19:51:38.929877  409972 command_runner.go:130] > VERSION_ID=2021.02.12
	I1002 19:51:38.929882  409972 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1002 19:51:38.930139  409972 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 19:51:38.930160  409972 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-390762/.minikube/addons for local assets ...
	I1002 19:51:38.930238  409972 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-390762/.minikube/files for local assets ...
	I1002 19:51:38.930331  409972 filesync.go:149] local asset: /home/jenkins/minikube-integration/17323-390762/.minikube/files/etc/ssl/certs/3979952.pem -> 3979952.pem in /etc/ssl/certs
	I1002 19:51:38.930342  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/files/etc/ssl/certs/3979952.pem -> /etc/ssl/certs/3979952.pem
	I1002 19:51:38.930447  409972 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 19:51:38.939159  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/files/etc/ssl/certs/3979952.pem --> /etc/ssl/certs/3979952.pem (1708 bytes)
	I1002 19:51:38.964743  409972 start.go:303] post-start completed in 130.047421ms
	I1002 19:51:38.964839  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetConfigRaw
	I1002 19:51:38.965433  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetIP
	I1002 19:51:38.968147  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:38.968530  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:71:7c", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:51:29 +0000 UTC Type:0 Mac:52:54:00:fb:71:7c Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-058614-m02 Clientid:01:52:54:00:fb:71:7c}
	I1002 19:51:38.968567  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:38.968871  409972 profile.go:148] Saving config to /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/config.json ...
	I1002 19:51:38.969048  409972 start.go:128] duration metric: createHost completed in 24.932513301s
	I1002 19:51:38.969071  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHHostname
	I1002 19:51:38.971497  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:38.971862  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:71:7c", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:51:29 +0000 UTC Type:0 Mac:52:54:00:fb:71:7c Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-058614-m02 Clientid:01:52:54:00:fb:71:7c}
	I1002 19:51:38.971888  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:38.972003  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHPort
	I1002 19:51:38.972194  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHKeyPath
	I1002 19:51:38.972344  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHKeyPath
	I1002 19:51:38.972468  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHUsername
	I1002 19:51:38.972600  409972 main.go:141] libmachine: Using SSH client type: native
	I1002 19:51:38.972900  409972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1002 19:51:38.972910  409972 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 19:51:39.088246  409972 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696276299.059774307
	
	I1002 19:51:39.088270  409972 fix.go:206] guest clock: 1696276299.059774307
	I1002 19:51:39.088278  409972 fix.go:219] Guest: 2023-10-02 19:51:39.059774307 +0000 UTC Remote: 2023-10-02 19:51:38.969060164 +0000 UTC m=+100.796753140 (delta=90.714143ms)
	I1002 19:51:39.088294  409972 fix.go:190] guest clock delta is within tolerance: 90.714143ms
	I1002 19:51:39.088299  409972 start.go:83] releasing machines lock for "multinode-058614-m02", held for 25.051841073s
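	The guest-clock check above compares a timestamp read inside the VM against the host's and accepts the skew if the delta is within tolerance. A rough sketch, assuming a 1s tolerance and taking both timestamps locally since there is no VM here:

```shell
set -eu
# Stand-ins for the remote (host) and guest clocks; in the log both come
# from `date +%s.%N`, one over SSH.
remote=$(date +%s.%N)
guest=$(date +%s.%N)

# Absolute difference in seconds, computed in awk since shell arithmetic
# is integer-only.
delta=$(awk -v a="$guest" -v b="$remote" 'BEGIN { d = a - b; if (d < 0) d = -d; print d }')
awk -v d="$delta" 'BEGIN { exit !(d < 1.0) }' && echo "within tolerance"
```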
	I1002 19:51:39.088321  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .DriverName
	I1002 19:51:39.088632  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetIP
	I1002 19:51:39.091554  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:39.091925  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:71:7c", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:51:29 +0000 UTC Type:0 Mac:52:54:00:fb:71:7c Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-058614-m02 Clientid:01:52:54:00:fb:71:7c}
	I1002 19:51:39.091962  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:39.094318  409972 out.go:177] * Found network options:
	I1002 19:51:39.095654  409972 out.go:177]   - NO_PROXY=192.168.39.83
	W1002 19:51:39.096751  409972 proxy.go:119] fail to check proxy env: Error ip not in block
	I1002 19:51:39.096776  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .DriverName
	I1002 19:51:39.097308  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .DriverName
	I1002 19:51:39.097533  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .DriverName
	I1002 19:51:39.097640  409972 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 19:51:39.097674  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHHostname
	W1002 19:51:39.097765  409972 proxy.go:119] fail to check proxy env: Error ip not in block
	I1002 19:51:39.097882  409972 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 19:51:39.097910  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHHostname
	I1002 19:51:39.100506  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:39.100896  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:71:7c", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:51:29 +0000 UTC Type:0 Mac:52:54:00:fb:71:7c Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-058614-m02 Clientid:01:52:54:00:fb:71:7c}
	I1002 19:51:39.100928  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:39.100954  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:39.101099  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHPort
	I1002 19:51:39.101279  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHKeyPath
	I1002 19:51:39.101385  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:71:7c", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:51:29 +0000 UTC Type:0 Mac:52:54:00:fb:71:7c Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-058614-m02 Clientid:01:52:54:00:fb:71:7c}
	I1002 19:51:39.101419  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:39.101625  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHPort
	I1002 19:51:39.101645  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHUsername
	I1002 19:51:39.101812  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHKeyPath
	I1002 19:51:39.101824  409972 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614-m02/id_rsa Username:docker}
	I1002 19:51:39.101940  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHUsername
	I1002 19:51:39.102096  409972 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614-m02/id_rsa Username:docker}
	I1002 19:51:39.209888  409972 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 19:51:39.209949  409972 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1002 19:51:39.209979  409972 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 19:51:39.210049  409972 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 19:51:39.225178  409972 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1002 19:51:39.225251  409972 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
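	The disable step above does not delete conflicting CNI configs; it renames them with a `.mk_disabled` suffix so the runtime ignores them but they remain recoverable. A sketch of that rename, assuming a temp directory stands in for `/etc/cni/net.d` and no sudo is needed there:

```shell
set -eu
cni_dir=$(mktemp -d)
touch "$cni_dir/87-podman-bridge.conflist" "$cni_dir/10-flannel.conf"

# Rename every bridge/podman config not already disabled, mirroring the
# find -exec mv seen in the log; other configs are left alone.
find "$cni_dir" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
```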
	I1002 19:51:39.225270  409972 start.go:469] detecting cgroup driver to use...
	I1002 19:51:39.225440  409972 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 19:51:39.243925  409972 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1002 19:51:39.244357  409972 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1002 19:51:39.253653  409972 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 19:51:39.262837  409972 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 19:51:39.262886  409972 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 19:51:39.273721  409972 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 19:51:39.282763  409972 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 19:51:39.291836  409972 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 19:51:39.301897  409972 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 19:51:39.311705  409972 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 19:51:39.320957  409972 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 19:51:39.329201  409972 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1002 19:51:39.329387  409972 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 19:51:39.337538  409972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:51:39.436552  409972 ssh_runner.go:195] Run: sudo systemctl restart containerd
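	The `sed` runs above rewrite `/etc/containerd/config.toml` in place to force the `cgroupfs` driver. A sketch of the `SystemdCgroup` flip, assuming a temp file stands in for the real config and a GNU `sed` with `-i`:

```shell
set -eu
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Flip SystemdCgroup to false regardless of its current value, preserving
# indentation, exactly as the log's sed expression does.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep -q 'SystemdCgroup = false' "$cfg"
```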
	I1002 19:51:39.454828  409972 start.go:469] detecting cgroup driver to use...
	I1002 19:51:39.454939  409972 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 19:51:39.467839  409972 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1002 19:51:39.467942  409972 command_runner.go:130] > [Unit]
	I1002 19:51:39.467966  409972 command_runner.go:130] > Description=Docker Application Container Engine
	I1002 19:51:39.467973  409972 command_runner.go:130] > Documentation=https://docs.docker.com
	I1002 19:51:39.467985  409972 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1002 19:51:39.467992  409972 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1002 19:51:39.467998  409972 command_runner.go:130] > StartLimitBurst=3
	I1002 19:51:39.468004  409972 command_runner.go:130] > StartLimitIntervalSec=60
	I1002 19:51:39.468008  409972 command_runner.go:130] > [Service]
	I1002 19:51:39.468015  409972 command_runner.go:130] > Type=notify
	I1002 19:51:39.468019  409972 command_runner.go:130] > Restart=on-failure
	I1002 19:51:39.468024  409972 command_runner.go:130] > Environment=NO_PROXY=192.168.39.83
	I1002 19:51:39.468034  409972 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1002 19:51:39.468042  409972 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1002 19:51:39.468049  409972 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1002 19:51:39.468057  409972 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1002 19:51:39.468065  409972 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1002 19:51:39.468074  409972 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1002 19:51:39.468083  409972 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1002 19:51:39.468092  409972 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1002 19:51:39.468101  409972 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1002 19:51:39.468105  409972 command_runner.go:130] > ExecStart=
	I1002 19:51:39.468122  409972 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1002 19:51:39.468131  409972 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1002 19:51:39.468138  409972 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1002 19:51:39.468146  409972 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1002 19:51:39.468151  409972 command_runner.go:130] > LimitNOFILE=infinity
	I1002 19:51:39.468155  409972 command_runner.go:130] > LimitNPROC=infinity
	I1002 19:51:39.468162  409972 command_runner.go:130] > LimitCORE=infinity
	I1002 19:51:39.468167  409972 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1002 19:51:39.468181  409972 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1002 19:51:39.468185  409972 command_runner.go:130] > TasksMax=infinity
	I1002 19:51:39.468189  409972 command_runner.go:130] > TimeoutStartSec=0
	I1002 19:51:39.468195  409972 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1002 19:51:39.468199  409972 command_runner.go:130] > Delegate=yes
	I1002 19:51:39.468208  409972 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1002 19:51:39.468218  409972 command_runner.go:130] > KillMode=process
	I1002 19:51:39.468224  409972 command_runner.go:130] > [Install]
	I1002 19:51:39.468228  409972 command_runner.go:130] > WantedBy=multi-user.target
	I1002 19:51:39.468519  409972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 19:51:39.482685  409972 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 19:51:39.500323  409972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 19:51:39.512229  409972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 19:51:39.523461  409972 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1002 19:51:39.550776  409972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 19:51:39.563364  409972 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 19:51:39.579779  409972 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1002 19:51:39.579862  409972 ssh_runner.go:195] Run: which cri-dockerd
	I1002 19:51:39.583744  409972 command_runner.go:130] > /usr/bin/cri-dockerd
	I1002 19:51:39.584181  409972 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 19:51:39.594026  409972 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1002 19:51:39.609958  409972 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 19:51:39.719526  409972 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 19:51:39.835248  409972 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 19:51:39.835304  409972 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1002 19:51:39.852270  409972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:51:39.975823  409972 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 19:51:41.353121  409972 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.377258136s)
	I1002 19:51:41.353211  409972 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 19:51:41.450319  409972 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1002 19:51:41.553865  409972 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 19:51:41.663182  409972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:51:41.777963  409972 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1002 19:51:41.794348  409972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:51:41.900172  409972 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1002 19:51:41.987813  409972 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1002 19:51:41.987896  409972 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1002 19:51:41.993653  409972 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1002 19:51:41.993678  409972 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 19:51:41.993688  409972 command_runner.go:130] > Device: 16h/22d	Inode: 900         Links: 1
	I1002 19:51:41.993697  409972 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1002 19:51:41.993706  409972 command_runner.go:130] > Access: 2023-10-02 19:51:41.888786164 +0000
	I1002 19:51:41.993715  409972 command_runner.go:130] > Modify: 2023-10-02 19:51:41.888786164 +0000
	I1002 19:51:41.993723  409972 command_runner.go:130] > Change: 2023-10-02 19:51:41.890789187 +0000
	I1002 19:51:41.993730  409972 command_runner.go:130] >  Birth: -
	I1002 19:51:41.993839  409972 start.go:537] Will wait 60s for crictl version
	I1002 19:51:41.993905  409972 ssh_runner.go:195] Run: which crictl
	I1002 19:51:41.998028  409972 command_runner.go:130] > /usr/bin/crictl
	I1002 19:51:41.998094  409972 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 19:51:42.047613  409972 command_runner.go:130] > Version:  0.1.0
	I1002 19:51:42.047641  409972 command_runner.go:130] > RuntimeName:  docker
	I1002 19:51:42.047649  409972 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1002 19:51:42.047658  409972 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 19:51:42.047679  409972 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1002 19:51:42.047741  409972 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 19:51:42.074566  409972 command_runner.go:130] > 24.0.6
	I1002 19:51:42.075755  409972 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 19:51:42.100307  409972 command_runner.go:130] > 24.0.6
	I1002 19:51:42.103021  409972 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I1002 19:51:42.104435  409972 out.go:177]   - env NO_PROXY=192.168.39.83
	I1002 19:51:42.105708  409972 main.go:141] libmachine: (multinode-058614-m02) Calling .GetIP
	I1002 19:51:42.108623  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:42.109044  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:71:7c", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:51:29 +0000 UTC Type:0 Mac:52:54:00:fb:71:7c Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-058614-m02 Clientid:01:52:54:00:fb:71:7c}
	I1002 19:51:42.109080  409972 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:51:42.109298  409972 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1002 19:51:42.113254  409972 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 19:51:42.126302  409972 certs.go:56] Setting up /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614 for IP: 192.168.39.104
	I1002 19:51:42.126335  409972 certs.go:190] acquiring lock for shared ca certs: {Name:mkd9eff411eb4f3b431b8dec98af3335c0ce4ff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:51:42.126501  409972 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17323-390762/.minikube/ca.key
	I1002 19:51:42.126537  409972 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17323-390762/.minikube/proxy-client-ca.key
	I1002 19:51:42.126552  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 19:51:42.126565  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 19:51:42.126577  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 19:51:42.126590  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 19:51:42.126639  409972 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/home/jenkins/minikube-integration/17323-390762/.minikube/certs/397995.pem (1338 bytes)
	W1002 19:51:42.126667  409972 certs.go:433] ignoring /home/jenkins/minikube-integration/17323-390762/.minikube/certs/home/jenkins/minikube-integration/17323-390762/.minikube/certs/397995_empty.pem, impossibly tiny 0 bytes
	I1002 19:51:42.126677  409972 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 19:51:42.126704  409972 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca.pem (1078 bytes)
	I1002 19:51:42.126727  409972 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/home/jenkins/minikube-integration/17323-390762/.minikube/certs/cert.pem (1123 bytes)
	I1002 19:51:42.126762  409972 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/home/jenkins/minikube-integration/17323-390762/.minikube/certs/key.pem (1675 bytes)
	I1002 19:51:42.126801  409972 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-390762/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17323-390762/.minikube/files/etc/ssl/certs/3979952.pem (1708 bytes)
	I1002 19:51:42.126829  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 19:51:42.126842  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/certs/397995.pem -> /usr/share/ca-certificates/397995.pem
	I1002 19:51:42.126854  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/files/etc/ssl/certs/3979952.pem -> /usr/share/ca-certificates/3979952.pem
	I1002 19:51:42.127188  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 19:51:42.149112  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 19:51:42.171098  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 19:51:42.192485  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 19:51:42.213488  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 19:51:42.234393  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/certs/397995.pem --> /usr/share/ca-certificates/397995.pem (1338 bytes)
	I1002 19:51:42.255168  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/files/etc/ssl/certs/3979952.pem --> /usr/share/ca-certificates/3979952.pem (1708 bytes)
	I1002 19:51:42.276968  409972 ssh_runner.go:195] Run: openssl version
	I1002 19:51:42.281922  409972 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1002 19:51:42.282198  409972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 19:51:42.292043  409972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 19:51:42.296351  409972 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 19:33 /usr/share/ca-certificates/minikubeCA.pem
	I1002 19:51:42.296481  409972 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:33 /usr/share/ca-certificates/minikubeCA.pem
	I1002 19:51:42.296531  409972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 19:51:42.301781  409972 command_runner.go:130] > b5213941
	I1002 19:51:42.301835  409972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 19:51:42.311926  409972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/397995.pem && ln -fs /usr/share/ca-certificates/397995.pem /etc/ssl/certs/397995.pem"
	I1002 19:51:42.321896  409972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/397995.pem
	I1002 19:51:42.326039  409972 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 19:38 /usr/share/ca-certificates/397995.pem
	I1002 19:51:42.326155  409972 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 19:38 /usr/share/ca-certificates/397995.pem
	I1002 19:51:42.326195  409972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/397995.pem
	I1002 19:51:42.331254  409972 command_runner.go:130] > 51391683
	I1002 19:51:42.331312  409972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/397995.pem /etc/ssl/certs/51391683.0"
	I1002 19:51:42.341579  409972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3979952.pem && ln -fs /usr/share/ca-certificates/3979952.pem /etc/ssl/certs/3979952.pem"
	I1002 19:51:42.351541  409972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3979952.pem
	I1002 19:51:42.355774  409972 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 19:38 /usr/share/ca-certificates/3979952.pem
	I1002 19:51:42.355792  409972 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 19:38 /usr/share/ca-certificates/3979952.pem
	I1002 19:51:42.355830  409972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3979952.pem
	I1002 19:51:42.361121  409972 command_runner.go:130] > 3ec20f2e
	I1002 19:51:42.361189  409972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3979952.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 19:51:42.371053  409972 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 19:51:42.374882  409972 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 19:51:42.374923  409972 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 19:51:42.375008  409972 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1002 19:51:42.403818  409972 command_runner.go:130] > cgroupfs
	I1002 19:51:42.404683  409972 cni.go:84] Creating CNI manager for ""
	I1002 19:51:42.404699  409972 cni.go:136] 2 nodes found, recommending kindnet
	I1002 19:51:42.404709  409972 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 19:51:42.404736  409972 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.104 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-058614 NodeName:multinode-058614-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.83"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 19:51:42.404878  409972 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-058614-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.83"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 19:51:42.404951  409972 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-058614-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-058614 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 19:51:42.405010  409972 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 19:51:42.413468  409972 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.2': No such file or directory
	I1002 19:51:42.413831  409972 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.2': No such file or directory
	
	Initiating transfer...
	I1002 19:51:42.413890  409972 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.2
	I1002 19:51:42.423048  409972 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubectl.sha256
	I1002 19:51:42.423077  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/cache/linux/amd64/v1.28.2/kubectl -> /var/lib/minikube/binaries/v1.28.2/kubectl
	I1002 19:51:42.423131  409972 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17323-390762/.minikube/cache/linux/amd64/v1.28.2/kubelet
	I1002 19:51:42.423150  409972 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.2/kubectl
	I1002 19:51:42.423199  409972 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17323-390762/.minikube/cache/linux/amd64/v1.28.2/kubeadm
	I1002 19:51:42.427451  409972 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubectl': No such file or directory
	I1002 19:51:42.427492  409972 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubectl': No such file or directory
	I1002 19:51:42.427513  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/cache/linux/amd64/v1.28.2/kubectl --> /var/lib/minikube/binaries/v1.28.2/kubectl (49864704 bytes)
	I1002 19:51:43.372634  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/cache/linux/amd64/v1.28.2/kubeadm -> /var/lib/minikube/binaries/v1.28.2/kubeadm
	I1002 19:51:43.372751  409972 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.2/kubeadm
	I1002 19:51:43.377922  409972 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubeadm': No such file or directory
	I1002 19:51:43.378513  409972 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubeadm': No such file or directory
	I1002 19:51:43.378547  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/cache/linux/amd64/v1.28.2/kubeadm --> /var/lib/minikube/binaries/v1.28.2/kubeadm (50757632 bytes)
	I1002 19:51:43.768503  409972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 19:51:43.782806  409972 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-390762/.minikube/cache/linux/amd64/v1.28.2/kubelet -> /var/lib/minikube/binaries/v1.28.2/kubelet
	I1002 19:51:43.782921  409972 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.2/kubelet
	I1002 19:51:43.787017  409972 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubelet': No such file or directory
	I1002 19:51:43.787066  409972 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubelet': No such file or directory
	I1002 19:51:43.787099  409972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-390762/.minikube/cache/linux/amd64/v1.28.2/kubelet --> /var/lib/minikube/binaries/v1.28.2/kubelet (110776320 bytes)
	I1002 19:51:44.331973  409972 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1002 19:51:44.341035  409972 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (383 bytes)
	I1002 19:51:44.356556  409972 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 19:51:44.372930  409972 ssh_runner.go:195] Run: grep 192.168.39.83	control-plane.minikube.internal$ /etc/hosts
	I1002 19:51:44.376635  409972 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.83	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 19:51:44.387770  409972 host.go:66] Checking if "multinode-058614" exists ...
	I1002 19:51:44.388081  409972 config.go:182] Loaded profile config "multinode-058614": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 19:51:44.388110  409972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:51:44.388154  409972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:51:44.402439  409972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45821
	I1002 19:51:44.402858  409972 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:51:44.403326  409972 main.go:141] libmachine: Using API Version  1
	I1002 19:51:44.403347  409972 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:51:44.403694  409972 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:51:44.403870  409972 main.go:141] libmachine: (multinode-058614) Calling .DriverName
	I1002 19:51:44.404049  409972 start.go:304] JoinCluster: &{Name:multinode-058614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.2 ClusterName:multinode-058614 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.83 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true
ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 19:51:44.404148  409972 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1002 19:51:44.404163  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHHostname
	I1002 19:51:44.407015  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:51:44.407483  409972 main.go:141] libmachine: (multinode-058614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:90:6b", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:50:13 +0000 UTC Type:0 Mac:52:54:00:c7:90:6b Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-058614 Clientid:01:52:54:00:c7:90:6b}
	I1002 19:51:44.407519  409972 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined IP address 192.168.39.83 and MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:51:44.407677  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHPort
	I1002 19:51:44.407853  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHKeyPath
	I1002 19:51:44.407987  409972 main.go:141] libmachine: (multinode-058614) Calling .GetSSHUsername
	I1002 19:51:44.408081  409972 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614/id_rsa Username:docker}
	I1002 19:51:44.582126  409972 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ug1hb1.0jkemqijkdvsk2jl --discovery-token-ca-cert-hash sha256:34e35905a788df884ba37f75e8ba6d269171b9f9a012b72423ad6eee1d6bffad 
	I1002 19:51:44.582192  409972 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.104 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1002 19:51:44.582234  409972 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ug1hb1.0jkemqijkdvsk2jl --discovery-token-ca-cert-hash sha256:34e35905a788df884ba37f75e8ba6d269171b9f9a012b72423ad6eee1d6bffad --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-058614-m02"
	I1002 19:51:44.630479  409972 command_runner.go:130] > [preflight] Running pre-flight checks
	I1002 19:51:44.800353  409972 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1002 19:51:44.800382  409972 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1002 19:51:44.844713  409972 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 19:51:44.844747  409972 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 19:51:44.844755  409972 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1002 19:51:44.967022  409972 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1002 19:51:46.766252  409972 command_runner.go:130] > This node has joined the cluster:
	I1002 19:51:46.766290  409972 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1002 19:51:46.766297  409972 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1002 19:51:46.766304  409972 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1002 19:51:46.768692  409972 command_runner.go:130] ! W1002 19:51:44.609802    1158 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1002 19:51:46.768725  409972 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 19:51:46.768758  409972 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ug1hb1.0jkemqijkdvsk2jl --discovery-token-ca-cert-hash sha256:34e35905a788df884ba37f75e8ba6d269171b9f9a012b72423ad6eee1d6bffad --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-058614-m02": (2.186496409s)
	I1002 19:51:46.768804  409972 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1002 19:51:47.015868  409972 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I1002 19:51:47.015908  409972 start.go:306] JoinCluster complete in 2.611861683s
	I1002 19:51:47.015921  409972 cni.go:84] Creating CNI manager for ""
	I1002 19:51:47.015926  409972 cni.go:136] 2 nodes found, recommending kindnet
	I1002 19:51:47.015980  409972 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 19:51:47.021766  409972 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1002 19:51:47.021794  409972 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1002 19:51:47.021805  409972 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1002 19:51:47.021816  409972 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 19:51:47.021829  409972 command_runner.go:130] > Access: 2023-10-02 19:50:11.118876540 +0000
	I1002 19:51:47.021841  409972 command_runner.go:130] > Modify: 2023-09-19 00:07:45.000000000 +0000
	I1002 19:51:47.021849  409972 command_runner.go:130] > Change: 2023-10-02 19:50:09.361876540 +0000
	I1002 19:51:47.021857  409972 command_runner.go:130] >  Birth: -
	I1002 19:51:47.021909  409972 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1002 19:51:47.021923  409972 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1002 19:51:47.039963  409972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 19:51:47.370119  409972 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1002 19:51:47.370157  409972 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1002 19:51:47.370166  409972 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1002 19:51:47.370174  409972 command_runner.go:130] > daemonset.apps/kindnet configured
	I1002 19:51:47.370703  409972 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17323-390762/kubeconfig
	I1002 19:51:47.371044  409972 kapi.go:59] client config for multinode-058614: &rest.Config{Host:"https://192.168.39.83:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/client.crt", KeyFile:"/home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/client.key", CAFile:"/home/jenkins/minikube-integration/17323-390762/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf83c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 19:51:47.371525  409972 round_trippers.go:463] GET https://192.168.39.83:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1002 19:51:47.371544  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:47.371557  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:47.371566  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:47.374549  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:47.374571  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:47.374581  409972 round_trippers.go:580]     Content-Length: 291
	I1002 19:51:47.374589  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:47 GMT
	I1002 19:51:47.374596  409972 round_trippers.go:580]     Audit-Id: 92252faa-2c56-4335-8dce-3951db9c84f0
	I1002 19:51:47.374605  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:47.374613  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:47.374626  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:47.374634  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:47.374662  409972 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"37a26193-5816-4ab7-acfa-78d217f28a0e","resourceVersion":"458","creationTimestamp":"2023-10-02T19:50:44Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1002 19:51:47.374783  409972 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-058614" context rescaled to 1 replicas
	I1002 19:51:47.374821  409972 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.104 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1002 19:51:47.377238  409972 out.go:177] * Verifying Kubernetes components...
	I1002 19:51:47.378453  409972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 19:51:47.393726  409972 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17323-390762/kubeconfig
	I1002 19:51:47.393988  409972 kapi.go:59] client config for multinode-058614: &rest.Config{Host:"https://192.168.39.83:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/client.crt", KeyFile:"/home/jenkins/minikube-integration/17323-390762/.minikube/profiles/multinode-058614/client.key", CAFile:"/home/jenkins/minikube-integration/17323-390762/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf83c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 19:51:47.394310  409972 node_ready.go:35] waiting up to 6m0s for node "multinode-058614-m02" to be "Ready" ...
	I1002 19:51:47.394382  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:51:47.394389  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:47.394397  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:47.394405  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:47.396842  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:47.396868  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:47.396877  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:47.396882  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:47.396888  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:47.396898  409972 round_trippers.go:580]     Content-Length: 3897
	I1002 19:51:47.396904  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:47 GMT
	I1002 19:51:47.396912  409972 round_trippers.go:580]     Audit-Id: b1663cd4-9229-4656-8381-2b107707e833
	I1002 19:51:47.396919  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:47.397011  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"516","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2873 chars]
	I1002 19:51:47.397342  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:51:47.397356  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:47.397368  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:47.397377  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:47.399798  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:47.399816  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:47.399825  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:47.399834  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:47.399844  409972 round_trippers.go:580]     Content-Length: 3897
	I1002 19:51:47.399853  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:47 GMT
	I1002 19:51:47.399861  409972 round_trippers.go:580]     Audit-Id: 355855e9-be8a-403b-965f-54ed6681a79f
	I1002 19:51:47.399874  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:47.399879  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:47.399951  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"516","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2873 chars]
	I1002 19:51:47.900697  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:51:47.900723  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:47.900732  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:47.900738  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:47.903641  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:47.903668  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:47.903676  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:47 GMT
	I1002 19:51:47.903681  409972 round_trippers.go:580]     Audit-Id: fa764113-8c85-4eb9-af95-f46286e6849f
	I1002 19:51:47.903686  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:47.903696  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:47.903701  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:47.903706  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:47.903723  409972 round_trippers.go:580]     Content-Length: 3897
	I1002 19:51:47.903956  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"516","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2873 chars]
	I1002 19:51:48.400716  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:51:48.400744  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:48.400752  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:48.400758  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:48.403361  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:48.403382  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:48.403388  409972 round_trippers.go:580]     Audit-Id: 0b2490fa-7709-472f-b0f8-623aa54ea846
	I1002 19:51:48.403394  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:48.403401  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:48.403410  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:48.403418  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:48.403427  409972 round_trippers.go:580]     Content-Length: 3897
	I1002 19:51:48.403464  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:48 GMT
	I1002 19:51:48.403520  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"516","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2873 chars]
	I1002 19:51:48.901185  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:51:48.901214  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:48.901225  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:48.901232  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:48.904079  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:48.904110  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:48.904119  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:48.904125  409972 round_trippers.go:580]     Content-Length: 3897
	I1002 19:51:48.904131  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:48 GMT
	I1002 19:51:48.904136  409972 round_trippers.go:580]     Audit-Id: ae9e1483-fd71-4b1a-ac40-8a37d931be12
	I1002 19:51:48.904141  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:48.904147  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:48.904155  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:48.904250  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"516","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2873 chars]
	I1002 19:51:49.401403  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:51:49.401432  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:49.401444  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:49.401454  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:49.404700  409972 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 19:51:49.404736  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:49.404747  409972 round_trippers.go:580]     Audit-Id: 99aaa6ef-6b39-4813-a689-b72d4d9ba6ce
	I1002 19:51:49.404755  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:49.404763  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:49.404770  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:49.404781  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:49.404790  409972 round_trippers.go:580]     Content-Length: 3897
	I1002 19:51:49.404798  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:49 GMT
	I1002 19:51:49.404898  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"516","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2873 chars]
	I1002 19:51:49.405170  409972 node_ready.go:58] node "multinode-058614-m02" has status "Ready":"False"
	I1002 19:51:49.900511  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:51:49.900537  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:49.900545  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:49.900551  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:49.903375  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:49.903394  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:49.903401  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:49.903407  409972 round_trippers.go:580]     Content-Length: 3897
	I1002 19:51:49.903413  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:49 GMT
	I1002 19:51:49.903418  409972 round_trippers.go:580]     Audit-Id: a45b7334-c61c-4e10-9ff6-99a37d9d6039
	I1002 19:51:49.903423  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:49.903428  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:49.903433  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:49.903522  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"516","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2873 chars]
	I1002 19:51:50.400697  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:51:50.400723  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:50.400732  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:50.400737  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:50.403377  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:50.403404  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:50.403415  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:50 GMT
	I1002 19:51:50.403424  409972 round_trippers.go:580]     Audit-Id: 422dd633-327f-4de1-9096-2bdbf64d6241
	I1002 19:51:50.403452  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:50.403466  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:50.403476  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:50.403488  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:50.403503  409972 round_trippers.go:580]     Content-Length: 3897
	I1002 19:51:50.403611  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"516","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2873 chars]
	I1002 19:51:50.900774  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:51:50.900804  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:50.900816  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:50.900825  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:50.904227  409972 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 19:51:50.904253  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:50.904261  409972 round_trippers.go:580]     Content-Length: 3897
	I1002 19:51:50.904267  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:50 GMT
	I1002 19:51:50.904271  409972 round_trippers.go:580]     Audit-Id: be0c5b57-8298-4653-bb82-277bd34c6b62
	I1002 19:51:50.904278  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:50.904286  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:50.904294  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:50.904301  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:50.904391  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"516","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2873 chars]
	I1002 19:51:51.400710  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:51:51.400738  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:51.400746  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:51.400752  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:51.403729  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:51.403762  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:51.403773  409972 round_trippers.go:580]     Audit-Id: c70e983d-7750-484f-8c91-cf8306f652f1
	I1002 19:51:51.403781  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:51.403789  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:51.403797  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:51.403813  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:51.403825  409972 round_trippers.go:580]     Content-Length: 3897
	I1002 19:51:51.403834  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:51 GMT
	I1002 19:51:51.403967  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"516","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2873 chars]
	I1002 19:51:51.901185  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:51:51.901213  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:51.901221  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:51.901227  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:51.904353  409972 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 19:51:51.904384  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:51.904396  409972 round_trippers.go:580]     Audit-Id: f0f627a8-5580-4a37-85a6-2f0f8952e3e2
	I1002 19:51:51.904406  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:51.904414  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:51.904428  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:51.904436  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:51.904448  409972 round_trippers.go:580]     Content-Length: 3897
	I1002 19:51:51.904460  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:51 GMT
	I1002 19:51:51.904522  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"516","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2873 chars]
	I1002 19:51:51.904808  409972 node_ready.go:58] node "multinode-058614-m02" has status "Ready":"False"
	I1002 19:51:52.400700  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:51:52.400730  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:52.400745  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:52.400754  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:52.404315  409972 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 19:51:52.404338  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:52.404345  409972 round_trippers.go:580]     Content-Length: 3897
	I1002 19:51:52.404350  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:52 GMT
	I1002 19:51:52.404355  409972 round_trippers.go:580]     Audit-Id: 6e8a1511-f917-4e6d-abf3-125b7633d9a8
	I1002 19:51:52.404360  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:52.404366  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:52.404371  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:52.404378  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:52.404477  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"516","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2873 chars]
	I1002 19:51:52.900689  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:51:52.900720  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:52.900731  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:52.900740  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:52.904700  409972 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 19:51:52.904726  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:52.904737  409972 round_trippers.go:580]     Audit-Id: 07f7ed9b-9f56-4234-9dde-0daa5868fa4f
	I1002 19:51:52.904746  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:52.904756  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:52.904767  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:52.904780  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:52.904796  409972 round_trippers.go:580]     Content-Length: 3897
	I1002 19:51:52.904808  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:52 GMT
	I1002 19:51:52.905003  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"516","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2873 chars]
	I1002 19:51:53.401288  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:51:53.401323  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:53.401334  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:53.401342  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:53.404109  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:53.404132  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:53.404143  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:53 GMT
	I1002 19:51:53.404151  409972 round_trippers.go:580]     Audit-Id: a1fe1e91-7260-4d23-9d13-17435687d2a9
	I1002 19:51:53.404159  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:53.404167  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:53.404176  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:53.404186  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:53.404196  409972 round_trippers.go:580]     Content-Length: 3897
	I1002 19:51:53.404287  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"516","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2873 chars]
	I1002 19:51:53.900522  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:51:53.900554  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:53.900566  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:53.900576  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:53.904277  409972 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 19:51:53.904300  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:53.904308  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:53.904314  409972 round_trippers.go:580]     Content-Length: 3897
	I1002 19:51:53.904319  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:53 GMT
	I1002 19:51:53.904331  409972 round_trippers.go:580]     Audit-Id: e5d91e0f-d733-4d32-90a7-561bcb4e5bf8
	I1002 19:51:53.904337  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:53.904342  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:53.904347  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:53.904422  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"516","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2873 chars]
	I1002 19:51:54.400589  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:51:54.400615  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:54.400623  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:54.400630  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:54.403502  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:54.403532  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:54.403542  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:54.403550  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:54.403558  409972 round_trippers.go:580]     Content-Length: 3897
	I1002 19:51:54.403566  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:54 GMT
	I1002 19:51:54.403574  409972 round_trippers.go:580]     Audit-Id: 32e7d83c-4008-4196-b55b-406965568237
	I1002 19:51:54.403583  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:54.403596  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:54.403690  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"516","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2873 chars]
	I1002 19:51:54.403965  409972 node_ready.go:58] node "multinode-058614-m02" has status "Ready":"False"
	I1002 19:51:54.900677  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:51:54.900705  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:54.900713  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:54.900720  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:54.903673  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:54.903695  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:54.903704  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:54.903712  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:54.903719  409972 round_trippers.go:580]     Content-Length: 3897
	I1002 19:51:54.903727  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:54 GMT
	I1002 19:51:54.903734  409972 round_trippers.go:580]     Audit-Id: d224e85e-3d9d-483a-8a95-8241f520cebc
	I1002 19:51:54.903747  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:54.903762  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:54.903840  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"516","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2873 chars]
	I1002 19:51:55.400477  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:51:55.400512  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:55.400523  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:55.400533  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:55.404345  409972 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 19:51:55.404372  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:55.404381  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:55 GMT
	I1002 19:51:55.404389  409972 round_trippers.go:580]     Audit-Id: 476647ad-693e-4945-9b73-291169d4d325
	I1002 19:51:55.404396  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:55.404405  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:55.404422  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:55.404430  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:55.404439  409972 round_trippers.go:580]     Content-Length: 3897
	I1002 19:51:55.404523  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"516","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2873 chars]
	I1002 19:51:55.901318  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:51:55.901350  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:55.901362  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:55.901372  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:55.904108  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:55.904131  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:55.904142  409972 round_trippers.go:580]     Audit-Id: c7a157b5-45cb-47fc-97cf-4185358cabdc
	I1002 19:51:55.904150  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:55.904157  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:55.904165  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:55.904173  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:55.904181  409972 round_trippers.go:580]     Content-Length: 3897
	I1002 19:51:55.904194  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:55 GMT
	I1002 19:51:55.904287  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"516","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2873 chars]
	I1002 19:51:56.400820  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:51:56.400853  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:56.400865  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:56.400877  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:56.403803  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:56.403833  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:56.403843  409972 round_trippers.go:580]     Content-Length: 3897
	I1002 19:51:56.403852  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:56 GMT
	I1002 19:51:56.403860  409972 round_trippers.go:580]     Audit-Id: 741589f6-5a70-4692-832a-37ab050718a1
	I1002 19:51:56.403871  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:56.403880  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:56.403891  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:56.403900  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:56.403993  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"516","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2873 chars]
	I1002 19:51:56.404323  409972 node_ready.go:58] node "multinode-058614-m02" has status "Ready":"False"
	I1002 19:51:56.900963  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:51:56.900992  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:56.901005  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:56.901014  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:56.904313  409972 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 19:51:56.904340  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:56.904352  409972 round_trippers.go:580]     Content-Length: 3897
	I1002 19:51:56.904362  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:56 GMT
	I1002 19:51:56.904370  409972 round_trippers.go:580]     Audit-Id: 246049b2-5c5a-485e-838d-7653974b763a
	I1002 19:51:56.904377  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:56.904383  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:56.904388  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:56.904394  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:56.904472  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"516","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2873 chars]
	I1002 19:51:57.400762  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:51:57.400786  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:57.400794  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:57.400800  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:57.403662  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:57.403704  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:57.403715  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:57.403724  409972 round_trippers.go:580]     Content-Length: 3863
	I1002 19:51:57.403734  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:57 GMT
	I1002 19:51:57.403742  409972 round_trippers.go:580]     Audit-Id: c39bbd48-a86d-48a7-8fcb-2911a36da6c3
	I1002 19:51:57.403750  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:57.403758  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:57.403768  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:57.403886  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"541","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2839 chars]
	I1002 19:51:57.900386  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:51:57.900417  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:57.900428  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:57.900436  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:57.903867  409972 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 19:51:57.903899  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:57.903911  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:57.903919  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:57.903928  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:57.903936  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:57.903944  409972 round_trippers.go:580]     Content-Length: 3863
	I1002 19:51:57.903961  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:57 GMT
	I1002 19:51:57.903970  409972 round_trippers.go:580]     Audit-Id: ce28e3e5-8a0b-4af5-97ae-d44442675904
	I1002 19:51:57.904069  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"541","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2839 chars]
	I1002 19:51:58.401396  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:51:58.401421  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:58.401430  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:58.401436  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:58.404238  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:58.404260  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:58.404267  409972 round_trippers.go:580]     Audit-Id: 9aab193a-bb26-4549-86fa-06f9c6da95a0
	I1002 19:51:58.404273  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:58.404278  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:58.404283  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:58.404288  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:58.404293  409972 round_trippers.go:580]     Content-Length: 3863
	I1002 19:51:58.404298  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:58 GMT
	I1002 19:51:58.404368  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"541","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2839 chars]
	I1002 19:51:58.404610  409972 node_ready.go:58] node "multinode-058614-m02" has status "Ready":"False"
	I1002 19:51:58.900974  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:51:58.901000  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:58.901009  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:58.901014  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:58.903911  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:58.903931  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:58.903938  409972 round_trippers.go:580]     Content-Length: 3863
	I1002 19:51:58.903943  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:58 GMT
	I1002 19:51:58.903948  409972 round_trippers.go:580]     Audit-Id: f53ea64b-86a1-4a70-ab37-97057cb59a9e
	I1002 19:51:58.903953  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:58.903958  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:58.903963  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:58.903969  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:58.904007  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"541","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2839 chars]
	I1002 19:51:59.401009  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:51:59.401036  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:59.401045  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:59.401050  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:59.403469  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:59.403493  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:59.403500  409972 round_trippers.go:580]     Audit-Id: 4c178376-ca5c-4514-9e06-47b021252b34
	I1002 19:51:59.403505  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:59.403510  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:59.403516  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:59.403521  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:59.403527  409972 round_trippers.go:580]     Content-Length: 3863
	I1002 19:51:59.403533  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:59 GMT
	I1002 19:51:59.403644  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"541","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2839 chars]
	I1002 19:51:59.900609  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:51:59.900634  409972 round_trippers.go:469] Request Headers:
	I1002 19:51:59.900642  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:51:59.900649  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:51:59.903229  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:51:59.903250  409972 round_trippers.go:577] Response Headers:
	I1002 19:51:59.903259  409972 round_trippers.go:580]     Audit-Id: 712b8ac2-4987-4007-9fbe-497c3eac8f8b
	I1002 19:51:59.903268  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:51:59.903276  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:51:59.903285  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:51:59.903292  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:51:59.903297  409972 round_trippers.go:580]     Content-Length: 3863
	I1002 19:51:59.903302  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:51:59 GMT
	I1002 19:51:59.903367  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"541","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2839 chars]
	I1002 19:52:00.400983  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:52:00.401009  409972 round_trippers.go:469] Request Headers:
	I1002 19:52:00.401017  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:52:00.401023  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:52:00.403636  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:52:00.403661  409972 round_trippers.go:577] Response Headers:
	I1002 19:52:00.403670  409972 round_trippers.go:580]     Content-Length: 3863
	I1002 19:52:00.403677  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:52:00 GMT
	I1002 19:52:00.403685  409972 round_trippers.go:580]     Audit-Id: 2e5b6b9f-25a0-448d-91af-15a15f28d0e9
	I1002 19:52:00.403693  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:52:00.403707  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:52:00.403720  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:52:00.403728  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:52:00.403799  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"541","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2839 chars]
	I1002 19:52:00.900434  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:52:00.900462  409972 round_trippers.go:469] Request Headers:
	I1002 19:52:00.900470  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:52:00.900476  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:52:00.903942  409972 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 19:52:00.903965  409972 round_trippers.go:577] Response Headers:
	I1002 19:52:00.903975  409972 round_trippers.go:580]     Audit-Id: 40f2fee5-96b3-4255-bca5-5f3d200d16cd
	I1002 19:52:00.903984  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:52:00.903991  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:52:00.903998  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:52:00.904005  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:52:00.904012  409972 round_trippers.go:580]     Content-Length: 3863
	I1002 19:52:00.904019  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:52:00 GMT
	I1002 19:52:00.904106  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"541","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2839 chars]
	I1002 19:52:00.904359  409972 node_ready.go:58] node "multinode-058614-m02" has status "Ready":"False"
	I1002 19:52:01.400786  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:52:01.400818  409972 round_trippers.go:469] Request Headers:
	I1002 19:52:01.400831  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:52:01.400840  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:52:01.403659  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:52:01.403691  409972 round_trippers.go:577] Response Headers:
	I1002 19:52:01.403703  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:52:01.403712  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:52:01.403720  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:52:01.403729  409972 round_trippers.go:580]     Content-Length: 3863
	I1002 19:52:01.403746  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:52:01 GMT
	I1002 19:52:01.403753  409972 round_trippers.go:580]     Audit-Id: fcc1b59b-56a5-4a96-98ea-adb22fc77dcf
	I1002 19:52:01.403769  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:52:01.403874  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"541","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2839 chars]
	I1002 19:52:01.901407  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:52:01.901433  409972 round_trippers.go:469] Request Headers:
	I1002 19:52:01.901440  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:52:01.901446  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:52:01.904569  409972 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 19:52:01.904590  409972 round_trippers.go:577] Response Headers:
	I1002 19:52:01.904597  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:52:01 GMT
	I1002 19:52:01.904602  409972 round_trippers.go:580]     Audit-Id: 288b4be6-2752-4176-ae75-5a6b0cd94402
	I1002 19:52:01.904607  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:52:01.904612  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:52:01.904617  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:52:01.904622  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:52:01.904627  409972 round_trippers.go:580]     Content-Length: 3863
	I1002 19:52:01.904692  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"541","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 2839 chars]
	I1002 19:52:02.401387  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:52:02.401410  409972 round_trippers.go:469] Request Headers:
	I1002 19:52:02.401420  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:52:02.401426  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:52:02.404267  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:52:02.404285  409972 round_trippers.go:577] Response Headers:
	I1002 19:52:02.404292  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:52:02.404297  409972 round_trippers.go:580]     Content-Length: 3729
	I1002 19:52:02.404302  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:52:02 GMT
	I1002 19:52:02.404308  409972 round_trippers.go:580]     Audit-Id: 92fce821-1435-4509-abc8-b071da701f64
	I1002 19:52:02.404313  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:52:02.404318  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:52:02.404325  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:52:02.404451  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"558","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 2705 chars]
	I1002 19:52:02.404706  409972 node_ready.go:49] node "multinode-058614-m02" has status "Ready":"True"
	I1002 19:52:02.404719  409972 node_ready.go:38] duration metric: took 15.010389736s waiting for node "multinode-058614-m02" to be "Ready" ...
	I1002 19:52:02.404729  409972 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
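The `node_ready` lines above poll the Node object until its Ready condition flips to `"True"` (visible in the log when `resourceVersion` moves from 541 to 558). As a rough illustration of that check — not minikube's actual implementation, which goes through client-go typed objects — the condition can be read straight off the raw `/api/v1/nodes/<name>` response body with only the standard library:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// nodeStatus mirrors only the fields needed to read the Ready condition
// from a /api/v1/nodes/<name> response body.
type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// nodeReady reports whether the Node JSON carries a condition with
// Type "Ready" and Status "True".
func nodeReady(body []byte) (bool, error) {
	var n nodeStatus
	if err := json.Unmarshal(body, &n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	// No Ready condition yet (e.g. node just registered): treat as not ready.
	return false, nil
}

func main() {
	sample := []byte(`{"status":{"conditions":[{"type":"Ready","status":"True"}]}}`)
	ok, err := nodeReady(sample)
	fmt.Println(ok, err) // true <nil>
}
```

The response bodies in the log are truncated before `status.conditions` appears, but the condition shape (`type`/`status` pairs) is the standard Kubernetes Node API; the struct above decodes just that slice and ignores everything else.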
	I1002 19:52:02.404797  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/namespaces/kube-system/pods
	I1002 19:52:02.404805  409972 round_trippers.go:469] Request Headers:
	I1002 19:52:02.404813  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:52:02.404821  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:52:02.413069  409972 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1002 19:52:02.413092  409972 round_trippers.go:577] Response Headers:
	I1002 19:52:02.413102  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:52:02.413110  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:52:02.413118  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:52:02.413126  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:52:02.413137  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:52:02 GMT
	I1002 19:52:02.413145  409972 round_trippers.go:580]     Audit-Id: eb048a4d-d119-4fa4-add7-cdd6ac5046d5
	I1002 19:52:02.414585  409972 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"558"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ssbfx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f646d313-d0bd-4b09-9968-ff0d119dfae3","resourceVersion":"454","creationTimestamp":"2023-10-02T19:50:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"54300122-8957-4e7c-9f9d-579c1d9eda57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:50:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"54300122-8957-4e7c-9f9d-579c1d9eda57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67687 chars]
	I1002 19:52:02.416579  409972 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ssbfx" in "kube-system" namespace to be "Ready" ...
	I1002 19:52:02.416690  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ssbfx
	I1002 19:52:02.416701  409972 round_trippers.go:469] Request Headers:
	I1002 19:52:02.416708  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:52:02.416717  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:52:02.418660  409972 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 19:52:02.418677  409972 round_trippers.go:577] Response Headers:
	I1002 19:52:02.418687  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:52:02.418698  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:52:02.418706  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:52:02.418712  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:52:02 GMT
	I1002 19:52:02.418720  409972 round_trippers.go:580]     Audit-Id: f003d26a-b241-4e3e-835c-9507b9376f85
	I1002 19:52:02.418725  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:52:02.418869  409972 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ssbfx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f646d313-d0bd-4b09-9968-ff0d119dfae3","resourceVersion":"454","creationTimestamp":"2023-10-02T19:50:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"54300122-8957-4e7c-9f9d-579c1d9eda57","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:50:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"54300122-8957-4e7c-9f9d-579c1d9eda57\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6283 chars]
	I1002 19:52:02.419270  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:52:02.419281  409972 round_trippers.go:469] Request Headers:
	I1002 19:52:02.419288  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:52:02.419293  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:52:02.421388  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:52:02.421404  409972 round_trippers.go:577] Response Headers:
	I1002 19:52:02.421410  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:52:02.421415  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:52:02.421422  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:52:02.421435  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:52:02 GMT
	I1002 19:52:02.421447  409972 round_trippers.go:580]     Audit-Id: 6b634d25-8bfb-4749-be41-ae862767a109
	I1002 19:52:02.421458  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:52:02.421564  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"463","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1002 19:52:02.421815  409972 pod_ready.go:92] pod "coredns-5dd5756b68-ssbfx" in "kube-system" namespace has status "Ready":"True"
	I1002 19:52:02.421828  409972 pod_ready.go:81] duration metric: took 5.231963ms waiting for pod "coredns-5dd5756b68-ssbfx" in "kube-system" namespace to be "Ready" ...
	I1002 19:52:02.421836  409972 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-058614" in "kube-system" namespace to be "Ready" ...
	I1002 19:52:02.421883  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-058614
	I1002 19:52:02.421890  409972 round_trippers.go:469] Request Headers:
	I1002 19:52:02.421897  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:52:02.421904  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:52:02.424866  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:52:02.424882  409972 round_trippers.go:577] Response Headers:
	I1002 19:52:02.424888  409972 round_trippers.go:580]     Audit-Id: f205d68b-ae21-48bc-8c81-3a5b8107fadb
	I1002 19:52:02.424893  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:52:02.424898  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:52:02.424903  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:52:02.424908  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:52:02.424913  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:52:02 GMT
	I1002 19:52:02.425037  409972 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-058614","namespace":"kube-system","uid":"a28dcb7b-9677-46e1-bef8-a7fa010f156b","resourceVersion":"425","creationTimestamp":"2023-10-02T19:50:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.83:2379","kubernetes.io/config.hash":"106f287475a5843afbb16e738a4dd1f4","kubernetes.io/config.mirror":"106f287475a5843afbb16e738a4dd1f4","kubernetes.io/config.seen":"2023-10-02T19:50:36.758483921Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:50:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5862 chars]
	I1002 19:52:02.425349  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:52:02.425359  409972 round_trippers.go:469] Request Headers:
	I1002 19:52:02.425365  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:52:02.425371  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:52:02.426900  409972 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 19:52:02.426917  409972 round_trippers.go:577] Response Headers:
	I1002 19:52:02.426924  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:52:02.426930  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:52:02 GMT
	I1002 19:52:02.426935  409972 round_trippers.go:580]     Audit-Id: ab18d9d7-2d84-4f2a-b134-463f27787bc1
	I1002 19:52:02.426940  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:52:02.426945  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:52:02.426949  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:52:02.427070  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"463","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1002 19:52:02.427368  409972 pod_ready.go:92] pod "etcd-multinode-058614" in "kube-system" namespace has status "Ready":"True"
	I1002 19:52:02.427378  409972 pod_ready.go:81] duration metric: took 5.536933ms waiting for pod "etcd-multinode-058614" in "kube-system" namespace to be "Ready" ...
	I1002 19:52:02.427392  409972 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-058614" in "kube-system" namespace to be "Ready" ...
	I1002 19:52:02.427432  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-058614
	I1002 19:52:02.427446  409972 round_trippers.go:469] Request Headers:
	I1002 19:52:02.427453  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:52:02.427459  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:52:02.430199  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:52:02.430215  409972 round_trippers.go:577] Response Headers:
	I1002 19:52:02.430222  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:52:02.430227  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:52:02.430240  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:52:02.430245  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:52:02 GMT
	I1002 19:52:02.430255  409972 round_trippers.go:580]     Audit-Id: 80c5355c-22ea-4f6d-a262-d09c02e5e1e2
	I1002 19:52:02.430263  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:52:02.430407  409972 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-058614","namespace":"kube-system","uid":"f0e4433a-e791-480f-8306-ecbdc6d3706f","resourceVersion":"422","creationTimestamp":"2023-10-02T19:50:45Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.83:8443","kubernetes.io/config.hash":"ad6af5517be9484355d3192cf7264036","kubernetes.io/config.mirror":"ad6af5517be9484355d3192cf7264036","kubernetes.io/config.seen":"2023-10-02T19:50:44.955487926Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:50:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7399 chars]
	I1002 19:52:02.430720  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:52:02.430729  409972 round_trippers.go:469] Request Headers:
	I1002 19:52:02.430736  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:52:02.430741  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:52:02.436179  409972 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1002 19:52:02.436193  409972 round_trippers.go:577] Response Headers:
	I1002 19:52:02.436199  409972 round_trippers.go:580]     Audit-Id: 7f8f088c-a041-4bd7-a4bc-b99606219e5a
	I1002 19:52:02.436205  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:52:02.436210  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:52:02.436221  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:52:02.436238  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:52:02.436251  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:52:02 GMT
	I1002 19:52:02.437139  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"463","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1002 19:52:02.437405  409972 pod_ready.go:92] pod "kube-apiserver-multinode-058614" in "kube-system" namespace has status "Ready":"True"
	I1002 19:52:02.437418  409972 pod_ready.go:81] duration metric: took 10.020365ms waiting for pod "kube-apiserver-multinode-058614" in "kube-system" namespace to be "Ready" ...
	I1002 19:52:02.437429  409972 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-058614" in "kube-system" namespace to be "Ready" ...
	I1002 19:52:02.437482  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-058614
	I1002 19:52:02.437489  409972 round_trippers.go:469] Request Headers:
	I1002 19:52:02.437496  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:52:02.437503  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:52:02.439380  409972 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 19:52:02.439394  409972 round_trippers.go:577] Response Headers:
	I1002 19:52:02.439403  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:52:02.439411  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:52:02 GMT
	I1002 19:52:02.439419  409972 round_trippers.go:580]     Audit-Id: 0e385547-cc9b-4a50-8ed5-6ac6defce9ca
	I1002 19:52:02.439427  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:52:02.439450  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:52:02.439461  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:52:02.440068  409972 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-058614","namespace":"kube-system","uid":"5ed0ef01-4ceb-4702-9e56-ea5bd25d377d","resourceVersion":"423","creationTimestamp":"2023-10-02T19:50:45Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a6f177c5a53135e8813857ecd09e4546","kubernetes.io/config.mirror":"a6f177c5a53135e8813857ecd09e4546","kubernetes.io/config.seen":"2023-10-02T19:50:44.955482294Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:50:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6969 chars]
	I1002 19:52:02.440381  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:52:02.440391  409972 round_trippers.go:469] Request Headers:
	I1002 19:52:02.440398  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:52:02.440404  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:52:02.441995  409972 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 19:52:02.442012  409972 round_trippers.go:577] Response Headers:
	I1002 19:52:02.442020  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:52:02.442029  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:52:02.442036  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:52:02 GMT
	I1002 19:52:02.442043  409972 round_trippers.go:580]     Audit-Id: 39b8df48-8d66-43ca-abf9-d87f49cb3e8a
	I1002 19:52:02.442053  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:52:02.442062  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:52:02.442219  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"463","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1002 19:52:02.442488  409972 pod_ready.go:92] pod "kube-controller-manager-multinode-058614" in "kube-system" namespace has status "Ready":"True"
	I1002 19:52:02.442501  409972 pod_ready.go:81] duration metric: took 5.062636ms waiting for pod "kube-controller-manager-multinode-058614" in "kube-system" namespace to be "Ready" ...
	I1002 19:52:02.442509  409972 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8r7q6" in "kube-system" namespace to be "Ready" ...
	I1002 19:52:02.601912  409972 request.go:629] Waited for 159.342028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.83:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8r7q6
	I1002 19:52:02.601986  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8r7q6
	I1002 19:52:02.601994  409972 round_trippers.go:469] Request Headers:
	I1002 19:52:02.602002  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:52:02.602010  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:52:02.604716  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:52:02.604736  409972 round_trippers.go:577] Response Headers:
	I1002 19:52:02.604743  409972 round_trippers.go:580]     Audit-Id: 5e0db747-19ab-4618-973d-e07220cc3b6d
	I1002 19:52:02.604749  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:52:02.604754  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:52:02.604759  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:52:02.604770  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:52:02.604778  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:52:02 GMT
	I1002 19:52:02.604953  409972 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8r7q6","generateName":"kube-proxy-","namespace":"kube-system","uid":"075b91f3-9483-4bb8-9afd-dec07038f014","resourceVersion":"418","creationTimestamp":"2023-10-02T19:50:57Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7554e598-7b2a-499d-95f7-df0eaaed9e8a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:50:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7554e598-7b2a-499d-95f7-df0eaaed9e8a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
	I1002 19:52:02.801791  409972 request.go:629] Waited for 196.420007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:52:02.801860  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:52:02.801865  409972 round_trippers.go:469] Request Headers:
	I1002 19:52:02.801872  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:52:02.801879  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:52:02.804512  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:52:02.804528  409972 round_trippers.go:577] Response Headers:
	I1002 19:52:02.804535  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:52:02.804541  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:52:02.804552  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:52:02.804563  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:52:02 GMT
	I1002 19:52:02.804571  409972 round_trippers.go:580]     Audit-Id: f53cf857-2eac-4361-8348-d01488992e97
	I1002 19:52:02.804582  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:52:02.804876  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"463","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1002 19:52:02.805223  409972 pod_ready.go:92] pod "kube-proxy-8r7q6" in "kube-system" namespace has status "Ready":"True"
	I1002 19:52:02.805238  409972 pod_ready.go:81] duration metric: took 362.723629ms waiting for pod "kube-proxy-8r7q6" in "kube-system" namespace to be "Ready" ...
	I1002 19:52:02.805250  409972 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-btqkr" in "kube-system" namespace to be "Ready" ...
	I1002 19:52:03.002371  409972 request.go:629] Waited for 197.030736ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.83:8443/api/v1/namespaces/kube-system/pods/kube-proxy-btqkr
	I1002 19:52:03.002449  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/namespaces/kube-system/pods/kube-proxy-btqkr
	I1002 19:52:03.002455  409972 round_trippers.go:469] Request Headers:
	I1002 19:52:03.002463  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:52:03.002469  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:52:03.006128  409972 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 19:52:03.006148  409972 round_trippers.go:577] Response Headers:
	I1002 19:52:03.006155  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:52:03.006161  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:52:03.006166  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:52:03.006171  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:52:03 GMT
	I1002 19:52:03.006176  409972 round_trippers.go:580]     Audit-Id: 0d67024f-2e31-41f9-afbb-42c98a73f18e
	I1002 19:52:03.006181  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:52:03.007094  409972 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-btqkr","generateName":"kube-proxy-","namespace":"kube-system","uid":"f69e9f38-f591-4bde-a50f-008e41e4735c","resourceVersion":"535","creationTimestamp":"2023-10-02T19:51:46Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7554e598-7b2a-499d-95f7-df0eaaed9e8a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7554e598-7b2a-499d-95f7-df0eaaed9e8a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
	I1002 19:52:03.201957  409972 request.go:629] Waited for 194.370536ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:52:03.202023  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614-m02
	I1002 19:52:03.202028  409972 round_trippers.go:469] Request Headers:
	I1002 19:52:03.202035  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:52:03.202041  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:52:03.204694  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:52:03.205121  409972 round_trippers.go:577] Response Headers:
	I1002 19:52:03.205137  409972 round_trippers.go:580]     Audit-Id: 7c1e3dca-f494-430f-9fc2-4052b1bedf18
	I1002 19:52:03.205148  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:52:03.205161  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:52:03.205171  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:52:03.205179  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:52:03.205185  409972 round_trippers.go:580]     Content-Length: 3729
	I1002 19:52:03.205190  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:52:03 GMT
	I1002 19:52:03.205301  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614-m02","uid":"b9c938d3-9e0a-4a8c-904a-c84a6d6dfeca","resourceVersion":"558","creationTimestamp":"2023-10-02T19:51:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:51:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 2705 chars]
	I1002 19:52:03.205648  409972 pod_ready.go:92] pod "kube-proxy-btqkr" in "kube-system" namespace has status "Ready":"True"
	I1002 19:52:03.205674  409972 pod_ready.go:81] duration metric: took 400.416484ms waiting for pod "kube-proxy-btqkr" in "kube-system" namespace to be "Ready" ...
	I1002 19:52:03.205684  409972 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-058614" in "kube-system" namespace to be "Ready" ...
	I1002 19:52:03.401628  409972 request.go:629] Waited for 195.861012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.83:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-058614
	I1002 19:52:03.401740  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-058614
	I1002 19:52:03.401749  409972 round_trippers.go:469] Request Headers:
	I1002 19:52:03.401758  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:52:03.401771  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:52:03.405187  409972 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 19:52:03.405209  409972 round_trippers.go:577] Response Headers:
	I1002 19:52:03.405222  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:52:03 GMT
	I1002 19:52:03.405228  409972 round_trippers.go:580]     Audit-Id: 7e93b551-b3eb-43fa-bd7c-eb1d78170226
	I1002 19:52:03.405233  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:52:03.405238  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:52:03.405243  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:52:03.405248  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:52:03.405570  409972 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-058614","namespace":"kube-system","uid":"f18491bf-ec7a-41bc-b666-1553594afa9a","resourceVersion":"424","creationTimestamp":"2023-10-02T19:50:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"68d95ca928ce5c338c3970f4212341e1","kubernetes.io/config.mirror":"68d95ca928ce5c338c3970f4212341e1","kubernetes.io/config.seen":"2023-10-02T19:50:36.758482865Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T19:50:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4699 chars]
	I1002 19:52:03.602349  409972 request.go:629] Waited for 196.379692ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:52:03.602428  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes/multinode-058614
	I1002 19:52:03.602433  409972 round_trippers.go:469] Request Headers:
	I1002 19:52:03.602441  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:52:03.602447  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:52:03.606130  409972 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 19:52:03.606148  409972 round_trippers.go:577] Response Headers:
	I1002 19:52:03.606155  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:52:03.606160  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:52:03.606166  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:52:03.606171  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:52:03 GMT
	I1002 19:52:03.606176  409972 round_trippers.go:580]     Audit-Id: 2a92112c-fa07-4483-90d7-44871a6314a0
	I1002 19:52:03.606182  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:52:03.606854  409972 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"463","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-10-02T19:50:41Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I1002 19:52:03.607148  409972 pod_ready.go:92] pod "kube-scheduler-multinode-058614" in "kube-system" namespace has status "Ready":"True"
	I1002 19:52:03.607161  409972 pod_ready.go:81] duration metric: took 401.470565ms waiting for pod "kube-scheduler-multinode-058614" in "kube-system" namespace to be "Ready" ...
	I1002 19:52:03.607171  409972 pod_ready.go:38] duration metric: took 1.202431071s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 19:52:03.607192  409972 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 19:52:03.607246  409972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 19:52:03.623684  409972 system_svc.go:56] duration metric: took 16.481274ms WaitForService to wait for kubelet.
	I1002 19:52:03.623715  409972 kubeadm.go:581] duration metric: took 16.248861751s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 19:52:03.623744  409972 node_conditions.go:102] verifying NodePressure condition ...
	I1002 19:52:03.802200  409972 request.go:629] Waited for 178.3574ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.83:8443/api/v1/nodes
	I1002 19:52:03.802294  409972 round_trippers.go:463] GET https://192.168.39.83:8443/api/v1/nodes
	I1002 19:52:03.802301  409972 round_trippers.go:469] Request Headers:
	I1002 19:52:03.802313  409972 round_trippers.go:473]     Accept: application/json, */*
	I1002 19:52:03.802325  409972 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1002 19:52:03.805029  409972 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 19:52:03.805049  409972 round_trippers.go:577] Response Headers:
	I1002 19:52:03.805056  409972 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 93988ef0-fb12-48ed-a808-ba8890f5a66f
	I1002 19:52:03.805062  409972 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5afab6d5-7b2f-48f8-b3e1-f8f73ed79fae
	I1002 19:52:03.805067  409972 round_trippers.go:580]     Date: Mon, 02 Oct 2023 19:52:03 GMT
	I1002 19:52:03.805072  409972 round_trippers.go:580]     Audit-Id: ed1ef39b-3533-4404-bf8a-24ee7eb3fa13
	I1002 19:52:03.805077  409972 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 19:52:03.805082  409972 round_trippers.go:580]     Content-Type: application/json
	I1002 19:52:03.805514  409972 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"561"},"items":[{"metadata":{"name":"multinode-058614","uid":"be14fac1-2d56-421b-8b19-577e228626db","resourceVersion":"463","creationTimestamp":"2023-10-02T19:50:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-058614","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-058614","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T19_50_46_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 8708 chars]
	I1002 19:52:03.805953  409972 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 19:52:03.805974  409972 node_conditions.go:123] node cpu capacity is 2
	I1002 19:52:03.805986  409972 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 19:52:03.805993  409972 node_conditions.go:123] node cpu capacity is 2
	I1002 19:52:03.806007  409972 node_conditions.go:105] duration metric: took 182.256768ms to run NodePressure ...
	I1002 19:52:03.806019  409972 start.go:228] waiting for startup goroutines ...
	I1002 19:52:03.806046  409972 start.go:242] writing updated cluster config ...
	I1002 19:52:03.806358  409972 ssh_runner.go:195] Run: rm -f paused
	I1002 19:52:03.855913  409972 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1002 19:52:03.858779  409972 out.go:177] * Done! kubectl is now configured to use "multinode-058614" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-10-02 19:50:10 UTC, ends at Mon 2023-10-02 19:53:28 UTC. --
	Oct 02 19:51:11 multinode-058614 dockerd[1127]: time="2023-10-02T19:51:11.268149114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 19:51:11 multinode-058614 dockerd[1127]: time="2023-10-02T19:51:11.268170239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 02 19:51:11 multinode-058614 dockerd[1127]: time="2023-10-02T19:51:11.268179656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 19:51:11 multinode-058614 dockerd[1127]: time="2023-10-02T19:51:11.392491186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 02 19:51:11 multinode-058614 dockerd[1127]: time="2023-10-02T19:51:11.392542628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 19:51:11 multinode-058614 dockerd[1127]: time="2023-10-02T19:51:11.392555531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 02 19:51:11 multinode-058614 dockerd[1127]: time="2023-10-02T19:51:11.392565821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 19:51:56 multinode-058614 dockerd[1121]: time="2023-10-02T19:51:56.741834227Z" level=info msg="ignoring event" container=ba14f983de42a8924d0bc3bda336eabd251efc8bf1928c719dec100cd5805a71 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 19:51:56 multinode-058614 dockerd[1127]: time="2023-10-02T19:51:56.743071130Z" level=info msg="shim disconnected" id=ba14f983de42a8924d0bc3bda336eabd251efc8bf1928c719dec100cd5805a71 namespace=moby
	Oct 02 19:51:56 multinode-058614 dockerd[1127]: time="2023-10-02T19:51:56.743173023Z" level=warning msg="cleaning up after shim disconnected" id=ba14f983de42a8924d0bc3bda336eabd251efc8bf1928c719dec100cd5805a71 namespace=moby
	Oct 02 19:51:56 multinode-058614 dockerd[1127]: time="2023-10-02T19:51:56.743183867Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 02 19:51:57 multinode-058614 dockerd[1127]: time="2023-10-02T19:51:57.846294699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 02 19:51:57 multinode-058614 dockerd[1127]: time="2023-10-02T19:51:57.846630405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 19:51:57 multinode-058614 dockerd[1127]: time="2023-10-02T19:51:57.846739456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 02 19:51:57 multinode-058614 dockerd[1127]: time="2023-10-02T19:51:57.846772455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 19:52:04 multinode-058614 dockerd[1127]: time="2023-10-02T19:52:04.976974132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 02 19:52:04 multinode-058614 dockerd[1127]: time="2023-10-02T19:52:04.977053280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 19:52:04 multinode-058614 dockerd[1127]: time="2023-10-02T19:52:04.977066241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 02 19:52:04 multinode-058614 dockerd[1127]: time="2023-10-02T19:52:04.977076869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 19:52:05 multinode-058614 cri-dockerd[1010]: time="2023-10-02T19:52:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/431a75036bf697b9b7e8e42021529d8d0284e00d775a3f5872fb2c78e7936a17/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 02 19:52:06 multinode-058614 cri-dockerd[1010]: time="2023-10-02T19:52:06Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Oct 02 19:52:06 multinode-058614 dockerd[1127]: time="2023-10-02T19:52:06.687837656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 02 19:52:06 multinode-058614 dockerd[1127]: time="2023-10-02T19:52:06.688009793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 19:52:06 multinode-058614 dockerd[1127]: time="2023-10-02T19:52:06.688037764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 02 19:52:06 multinode-058614 dockerd[1127]: time="2023-10-02T19:52:06.688495964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	011e2a3b5cae7       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   431a75036bf69       busybox-5bc68d56bd-kvr6v
	5e57e70a4e467       c7d1297425461                                                                                         About a minute ago   Running             kindnet-cni               1                   e4f500b1eba3c       kindnet-h5ml2
	8b5f5434f4369       6e38f40d628db                                                                                         2 minutes ago        Running             storage-provisioner       0                   428c52bb9d8af       storage-provisioner
	6ee09e8fff4c1       ead0a4a53df89                                                                                         2 minutes ago        Running             coredns                   0                   f5534aeb124e5       coredns-5dd5756b68-ssbfx
	ba14f983de42a       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              2 minutes ago        Exited              kindnet-cni               0                   e4f500b1eba3c       kindnet-h5ml2
	305c9676de7fa       c120fed2beb84                                                                                         2 minutes ago        Running             kube-proxy                0                   d86951427738c       kube-proxy-8r7q6
	260873b19bfcf       7a5d9d67a13f6                                                                                         2 minutes ago        Running             kube-scheduler            0                   c6b6a974db186       kube-scheduler-multinode-058614
	7bbd211191980       73deb9a3f7025                                                                                         2 minutes ago        Running             etcd                      0                   ccb4815a65602       etcd-multinode-058614
	e08535277a858       55f13c92defb1                                                                                         2 minutes ago        Running             kube-controller-manager   0                   7b64a109c3e51       kube-controller-manager-multinode-058614
	036c4b8dda697       cdcab12b2dd16                                                                                         2 minutes ago        Running             kube-apiserver            0                   42b4975451aab       kube-apiserver-multinode-058614
	
	* 
	* ==> coredns [6ee09e8fff4c] <==
	* [INFO] 10.244.1.2:53130 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000224097s
	[INFO] 10.244.0.3:35939 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000069289s
	[INFO] 10.244.0.3:51214 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001675763s
	[INFO] 10.244.0.3:46923 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000141703s
	[INFO] 10.244.0.3:37153 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116247s
	[INFO] 10.244.0.3:51323 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0012109s
	[INFO] 10.244.0.3:55993 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00007365s
	[INFO] 10.244.0.3:44122 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007328s
	[INFO] 10.244.0.3:46064 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062041s
	[INFO] 10.244.1.2:50245 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150461s
	[INFO] 10.244.1.2:54743 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000320212s
	[INFO] 10.244.1.2:43958 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000171327s
	[INFO] 10.244.1.2:37479 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105122s
	[INFO] 10.244.0.3:37232 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016198s
	[INFO] 10.244.0.3:36111 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000063614s
	[INFO] 10.244.0.3:37401 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000050312s
	[INFO] 10.244.0.3:59847 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090601s
	[INFO] 10.244.1.2:50746 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124594s
	[INFO] 10.244.1.2:39512 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000169388s
	[INFO] 10.244.1.2:41863 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00011345s
	[INFO] 10.244.1.2:56350 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111644s
	[INFO] 10.244.0.3:54360 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000070399s
	[INFO] 10.244.0.3:39051 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000036223s
	[INFO] 10.244.0.3:59871 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000035085s
	[INFO] 10.244.0.3:39369 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000030819s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-058614
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-058614
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=02d3b4696241894a75ebcb6562f5842e65de7b86
	                    minikube.k8s.io/name=multinode-058614
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T19_50_46_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 19:50:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-058614
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 19:53:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 19:52:17 +0000   Mon, 02 Oct 2023 19:50:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 19:52:17 +0000   Mon, 02 Oct 2023 19:50:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 19:52:17 +0000   Mon, 02 Oct 2023 19:50:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 19:52:17 +0000   Mon, 02 Oct 2023 19:51:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.83
	  Hostname:    multinode-058614
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 5dcb58a0b48e4a1dabbc0255d9c537a6
	  System UUID:                5dcb58a0-b48e-4a1d-abbc-0255d9c537a6
	  Boot ID:                    405a4b37-7cea-41d7-930a-129bb98d4b6f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-kvr6v                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 coredns-5dd5756b68-ssbfx                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m31s
	  kube-system                 etcd-multinode-058614                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m44s
	  kube-system                 kindnet-h5ml2                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m31s
	  kube-system                 kube-apiserver-multinode-058614             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 kube-controller-manager-multinode-058614    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 kube-proxy-8r7q6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-scheduler-multinode-058614             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m30s                  kube-proxy       
	  Normal  Starting                 2m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m52s (x8 over 2m52s)  kubelet          Node multinode-058614 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m52s (x8 over 2m52s)  kubelet          Node multinode-058614 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m52s (x7 over 2m52s)  kubelet          Node multinode-058614 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m43s                  kubelet          Node multinode-058614 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m43s                  kubelet          Node multinode-058614 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m43s                  kubelet          Node multinode-058614 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m32s                  node-controller  Node multinode-058614 event: Registered Node multinode-058614 in Controller
	  Normal  NodeReady                2m18s                  kubelet          Node multinode-058614 status is now: NodeReady
	
	
	Name:               multinode-058614-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-058614-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 19:51:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-058614-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 19:53:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 19:52:17 +0000   Mon, 02 Oct 2023 19:51:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 19:52:17 +0000   Mon, 02 Oct 2023 19:51:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 19:52:17 +0000   Mon, 02 Oct 2023 19:51:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 19:52:17 +0000   Mon, 02 Oct 2023 19:52:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.104
	  Hostname:    multinode-058614-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 1c1e2787c659409bb947faf11856f887
	  System UUID:                1c1e2787-c659-409b-b947-faf11856f887
	  Boot ID:                    1ded651c-83c3-429a-afdb-c5a97f0b03fc
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-dxdvv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kindnet-bj7lz               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      102s
	  kube-system                 kube-proxy-btqkr            0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 95s                  kube-proxy       
	  Normal  Starting                 103s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  103s (x2 over 103s)  kubelet          Node multinode-058614-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s (x2 over 103s)  kubelet          Node multinode-058614-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s (x2 over 103s)  kubelet          Node multinode-058614-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           102s                 node-controller  Node multinode-058614-m02 event: Registered Node multinode-058614-m02 in Controller
	  Normal  NodeAllocatableEnforced  102s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                86s                  kubelet          Node multinode-058614-m02 status is now: NodeReady
	
	
	Name:               multinode-058614-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-058614-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 19:52:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-058614-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 19:53:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 19:52:53 +0000   Mon, 02 Oct 2023 19:52:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 19:52:53 +0000   Mon, 02 Oct 2023 19:52:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 19:52:53 +0000   Mon, 02 Oct 2023 19:52:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 19:52:53 +0000   Mon, 02 Oct 2023 19:52:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.119
	  Hostname:    multinode-058614-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a0fcd2f879c44a66b8e8616d7305c62f
	  System UUID:                a0fcd2f8-79c4-4a66-b8e8-616d7305c62f
	  Boot ID:                    ac34271c-5ed5-43f8-be6b-450399ac0297
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5cjhj       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      47s
	  kube-system                 kube-proxy-gk9jz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 41s                kube-proxy       
	  Normal  Starting                 47s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  47s (x2 over 47s)  kubelet          Node multinode-058614-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s (x2 over 47s)  kubelet          Node multinode-058614-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s (x2 over 47s)  kubelet          Node multinode-058614-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  47s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           42s                node-controller  Node multinode-058614-m03 event: Registered Node multinode-058614-m03 in Controller
	  Normal  NodeReady                35s                kubelet          Node multinode-058614-m03 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.071155] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.308103] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.157420] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.146190] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.065901] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.471398] systemd-fstab-generator[547]: Ignoring "noauto" for root device
	[  +0.096263] systemd-fstab-generator[558]: Ignoring "noauto" for root device
	[  +1.089419] systemd-fstab-generator[734]: Ignoring "noauto" for root device
	[  +0.284025] systemd-fstab-generator[773]: Ignoring "noauto" for root device
	[  +0.110063] systemd-fstab-generator[784]: Ignoring "noauto" for root device
	[  +0.123005] systemd-fstab-generator[797]: Ignoring "noauto" for root device
	[  +1.490603] systemd-fstab-generator[955]: Ignoring "noauto" for root device
	[  +0.112298] systemd-fstab-generator[966]: Ignoring "noauto" for root device
	[  +0.098431] systemd-fstab-generator[977]: Ignoring "noauto" for root device
	[  +0.112676] systemd-fstab-generator[988]: Ignoring "noauto" for root device
	[  +0.113357] systemd-fstab-generator[1002]: Ignoring "noauto" for root device
	[  +4.483614] systemd-fstab-generator[1112]: Ignoring "noauto" for root device
	[  +2.499463] kauditd_printk_skb: 53 callbacks suppressed
	[  +4.322885] systemd-fstab-generator[1495]: Ignoring "noauto" for root device
	[  +8.266009] systemd-fstab-generator[2431]: Ignoring "noauto" for root device
	[ +13.731279] kauditd_printk_skb: 39 callbacks suppressed
	[Oct 2 19:51] kauditd_printk_skb: 14 callbacks suppressed
	
	* 
	* ==> etcd [7bbd21119198] <==
	* WARNING: 2023/10/02 19:51:46 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2023/10/02 19:51:46 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2023/10/02 19:51:46 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2023-10-02T19:51:46.674346Z","caller":"traceutil/trace.go:171","msg":"trace[1777453625] transaction","detail":"{read_only:false; response_revision:502; number_of_response:1; }","duration":"209.453523ms","start":"2023-10-02T19:51:46.464875Z","end":"2023-10-02T19:51:46.674329Z","steps":["trace[1777453625] 'process raft request'  (duration: 209.372876ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-02T19:51:46.674976Z","caller":"traceutil/trace.go:171","msg":"trace[462150793] transaction","detail":"{read_only:false; response_revision:499; number_of_response:1; }","duration":"344.282073ms","start":"2023-10-02T19:51:46.330678Z","end":"2023-10-02T19:51:46.67496Z","steps":["trace[462150793] 'process raft request'  (duration: 329.268469ms)","trace[462150793] 'compare'  (duration: 13.763137ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-02T19:51:46.675023Z","caller":"traceutil/trace.go:171","msg":"trace[1633014102] transaction","detail":"{read_only:false; number_of_response:1; response_revision:501; }","duration":"210.252788ms","start":"2023-10-02T19:51:46.464765Z","end":"2023-10-02T19:51:46.675018Z","steps":["trace[1633014102] 'process raft request'  (duration: 209.458606ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T19:51:46.675691Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-02T19:51:46.330653Z","time spent":"344.400317ms","remote":"127.0.0.1:44904","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":709,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/default/multinode-058614-m02.178a625037928ad9\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-058614-m02.178a625037928ad9\" value_size:629 lease:3237677953866480349 >> failure:<>"}
	{"level":"info","ts":"2023-10-02T19:51:46.676081Z","caller":"traceutil/trace.go:171","msg":"trace[1088825201] transaction","detail":"{read_only:false; number_of_response:1; response_revision:499; }","duration":"331.516574ms","start":"2023-10-02T19:51:46.344551Z","end":"2023-10-02T19:51:46.676068Z","steps":["trace[1088825201] 'process raft request'  (duration: 329.507951ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T19:51:46.676201Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-02T19:51:46.344537Z","time spent":"331.611685ms","remote":"127.0.0.1:44926","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":42,"response count":0,"response size":2083,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-058614-m02\" mod_revision:494 > success:<request_put:<key:\"/registry/minions/multinode-058614-m02\" value_size:1964 >> failure:<request_range:<key:\"/registry/minions/multinode-058614-m02\" > >"}
	{"level":"info","ts":"2023-10-02T19:51:46.676346Z","caller":"traceutil/trace.go:171","msg":"trace[1543374559] linearizableReadLoop","detail":"{readStateIndex:527; appliedIndex:525; }","duration":"216.938544ms","start":"2023-10-02T19:51:46.459398Z","end":"2023-10-02T19:51:46.676337Z","steps":["trace[1543374559] 'read index received'  (duration: 200.557348ms)","trace[1543374559] 'applied index is now lower than readState.Index'  (duration: 16.379978ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-02T19:51:46.675002Z","caller":"traceutil/trace.go:171","msg":"trace[680530908] transaction","detail":"{read_only:false; response_revision:501; number_of_response:1; }","duration":"210.263905ms","start":"2023-10-02T19:51:46.46473Z","end":"2023-10-02T19:51:46.674994Z","steps":["trace[680530908] 'process raft request'  (duration: 209.456875ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T19:51:46.680183Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"455.00785ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:2 size:6877"}
	{"level":"info","ts":"2023-10-02T19:51:46.680336Z","caller":"traceutil/trace.go:171","msg":"trace[218013011] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:2; response_revision:502; }","duration":"455.16143ms","start":"2023-10-02T19:51:46.22516Z","end":"2023-10-02T19:51:46.680322Z","steps":["trace[218013011] 'agreement among raft nodes before linearized reading'  (duration: 454.965307ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T19:51:46.68098Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-02T19:51:46.225143Z","time spent":"455.561132ms","remote":"127.0.0.1:44926","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":2,"response size":6899,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" "}
	{"level":"warn","ts":"2023-10-02T19:51:46.680567Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.812014ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-058614-m02\" ","response":"range_response_count:1 size:2370"}
	{"level":"info","ts":"2023-10-02T19:51:46.681787Z","caller":"traceutil/trace.go:171","msg":"trace[534285007] range","detail":"{range_begin:/registry/minions/multinode-058614-m02; range_end:; response_count:1; response_revision:502; }","duration":"211.031585ms","start":"2023-10-02T19:51:46.470743Z","end":"2023-10-02T19:51:46.681775Z","steps":["trace[534285007] 'agreement among raft nodes before linearized reading'  (duration: 209.787491ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T19:51:46.685052Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.542832ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-058614-m02\" ","response":"range_response_count:1 size:2370"}
	{"level":"info","ts":"2023-10-02T19:51:46.685138Z","caller":"traceutil/trace.go:171","msg":"trace[602171582] range","detail":"{range_begin:/registry/minions/multinode-058614-m02; range_end:; response_count:1; response_revision:502; }","duration":"139.622626ms","start":"2023-10-02T19:51:46.545495Z","end":"2023-10-02T19:51:46.685118Z","steps":["trace[602171582] 'agreement among raft nodes before linearized reading'  (duration: 135.12122ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-02T19:52:41.507532Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.030266ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12461049990721256648 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-058614-m03.178a625cfff5c1f6\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-058614-m03.178a625cfff5c1f6\" value_size:642 lease:3237677953866480349 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-10-02T19:52:41.50808Z","caller":"traceutil/trace.go:171","msg":"trace[1497016173] transaction","detail":"{read_only:false; response_revision:638; number_of_response:1; }","duration":"264.902945ms","start":"2023-10-02T19:52:41.243138Z","end":"2023-10-02T19:52:41.508041Z","steps":["trace[1497016173] 'process raft request'  (duration: 73.178409ms)","trace[1497016173] 'compare'  (duration: 190.937894ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-02T19:52:41.508547Z","caller":"traceutil/trace.go:171","msg":"trace[1461747623] linearizableReadLoop","detail":"{readStateIndex:680; appliedIndex:679; }","duration":"232.927546ms","start":"2023-10-02T19:52:41.275613Z","end":"2023-10-02T19:52:41.50854Z","steps":["trace[1461747623] 'read index received'  (duration: 40.710958ms)","trace[1461747623] 'applied index is now lower than readState.Index'  (duration: 192.215761ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-02T19:52:41.508979Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"233.366342ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-058614-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-02T19:52:41.509029Z","caller":"traceutil/trace.go:171","msg":"trace[1703488420] range","detail":"{range_begin:/registry/csinodes/multinode-058614-m03; range_end:; response_count:0; response_revision:639; }","duration":"233.434858ms","start":"2023-10-02T19:52:41.275588Z","end":"2023-10-02T19:52:41.509022Z","steps":["trace[1703488420] 'agreement among raft nodes before linearized reading'  (duration: 233.178537ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-02T19:52:41.508402Z","caller":"traceutil/trace.go:171","msg":"trace[1948399951] transaction","detail":"{read_only:false; response_revision:639; number_of_response:1; }","duration":"212.185542ms","start":"2023-10-02T19:52:41.296205Z","end":"2023-10-02T19:52:41.50839Z","steps":["trace[1948399951] 'process raft request'  (duration: 211.898999ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-02T19:52:43.597432Z","caller":"traceutil/trace.go:171","msg":"trace[1052090623] transaction","detail":"{read_only:false; response_revision:664; number_of_response:1; }","duration":"124.272755ms","start":"2023-10-02T19:52:43.473143Z","end":"2023-10-02T19:52:43.597416Z","steps":["trace[1052090623] 'process raft request'  (duration: 124.139309ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  19:53:28 up 3 min,  0 users,  load average: 0.52, 0.44, 0.19
	Linux multinode-058614 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [5e57e70a4e46] <==
	* I1002 19:52:48.948431       1 main.go:223] Handling node with IPs: map[192.168.39.83:{}]
	I1002 19:52:48.948601       1 main.go:227] handling current node
	I1002 19:52:48.948623       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I1002 19:52:48.948630       1 main.go:250] Node multinode-058614-m02 has CIDR [10.244.1.0/24] 
	I1002 19:52:48.949655       1 main.go:223] Handling node with IPs: map[192.168.39.119:{}]
	I1002 19:52:48.950069       1 main.go:250] Node multinode-058614-m03 has CIDR [10.244.2.0/24] 
	I1002 19:52:48.950766       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.119 Flags: [] Table: 0} 
	I1002 19:52:58.964690       1 main.go:223] Handling node with IPs: map[192.168.39.83:{}]
	I1002 19:52:58.964765       1 main.go:227] handling current node
	I1002 19:52:58.964788       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I1002 19:52:58.964796       1 main.go:250] Node multinode-058614-m02 has CIDR [10.244.1.0/24] 
	I1002 19:52:58.965335       1 main.go:223] Handling node with IPs: map[192.168.39.119:{}]
	I1002 19:52:58.965380       1 main.go:250] Node multinode-058614-m03 has CIDR [10.244.2.0/24] 
	I1002 19:53:08.978725       1 main.go:223] Handling node with IPs: map[192.168.39.83:{}]
	I1002 19:53:08.979301       1 main.go:227] handling current node
	I1002 19:53:08.979459       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I1002 19:53:08.979732       1 main.go:250] Node multinode-058614-m02 has CIDR [10.244.1.0/24] 
	I1002 19:53:08.980061       1 main.go:223] Handling node with IPs: map[192.168.39.119:{}]
	I1002 19:53:08.980194       1 main.go:250] Node multinode-058614-m03 has CIDR [10.244.2.0/24] 
	I1002 19:53:18.994402       1 main.go:223] Handling node with IPs: map[192.168.39.83:{}]
	I1002 19:53:18.994463       1 main.go:227] handling current node
	I1002 19:53:18.994475       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I1002 19:53:18.994481       1 main.go:250] Node multinode-058614-m02 has CIDR [10.244.1.0/24] 
	I1002 19:53:18.994663       1 main.go:223] Handling node with IPs: map[192.168.39.119:{}]
	I1002 19:53:18.994697       1 main.go:250] Node multinode-058614-m03 has CIDR [10.244.2.0/24] 
	
	* 
	* ==> kindnet [ba14f983de42] <==
	* I1002 19:51:46.711131       1 main.go:223] Handling node with IPs: map[192.168.39.83:{}]
	I1002 19:51:46.711146       1 main.go:227] handling current node
	I1002 19:51:46.711156       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I1002 19:51:46.711160       1 main.go:250] Node multinode-058614-m02 has CIDR [] 
	I1002 19:51:46.711170       1 main.go:204] Failed to reconcile routes, retrying after error: invalid CIDR address: 
	I1002 19:51:47.711337       1 main.go:223] Handling node with IPs: map[192.168.39.83:{}]
	I1002 19:51:47.711407       1 main.go:227] handling current node
	I1002 19:51:47.711428       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I1002 19:51:47.711434       1 main.go:250] Node multinode-058614-m02 has CIDR [] 
	I1002 19:51:47.711447       1 main.go:204] Failed to reconcile routes, retrying after error: invalid CIDR address: 
	I1002 19:51:49.712228       1 main.go:223] Handling node with IPs: map[192.168.39.83:{}]
	I1002 19:51:49.712308       1 main.go:227] handling current node
	I1002 19:51:49.712330       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I1002 19:51:49.712340       1 main.go:250] Node multinode-058614-m02 has CIDR [] 
	I1002 19:51:49.712357       1 main.go:204] Failed to reconcile routes, retrying after error: invalid CIDR address: 
	I1002 19:51:52.713769       1 main.go:223] Handling node with IPs: map[192.168.39.83:{}]
	I1002 19:51:52.713801       1 main.go:227] handling current node
	I1002 19:51:52.713816       1 main.go:223] Handling node with IPs: map[192.168.39.104:{}]
	I1002 19:51:52.713821       1 main.go:250] Node multinode-058614-m02 has CIDR [] 
	I1002 19:51:52.713833       1 main.go:204] Failed to reconcile routes, retrying after error: invalid CIDR address: 
	panic: Maximum retries reconciling node routes: invalid CIDR address: 
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:208 +0xd07
	
	* 
	* ==> kube-apiserver [036c4b8dda69] <==
	* E1002 19:51:46.538291       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 121.016µs, panicked: false, err: context canceled, panic-reason: <nil>
	E1002 19:51:46.538550       1 wrap.go:54] timeout or abort while handling: method=PATCH URI="/api/v1/namespaces/default/events/multinode-058614-m02.178a62500dfcfe9c" audit-ID="0795cafb-8b09-422c-bbf4-bb1e35c9270f"
	E1002 19:51:46.538723       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E1002 19:51:46.538841       1 timeout.go:142] post-timeout activity - time-elapsed: 4.428µs, PATCH "/api/v1/namespaces/default/events/multinode-058614-m02.178a62500dfcfe9c" result: <nil>
	I1002 19:51:46.541648       1 trace.go:236] Trace[666025387]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:0cd1ae30-3677-4ed1-a8f3-e7075f8fd66c,client:192.168.39.104,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/multinode-058614-m02,user-agent:kubelet/v1.28.2 (linux/amd64) kubernetes/89a4ea3,verb:GET (02-Oct-2023 19:51:46.037) (total time: 504ms):
	Trace[666025387]: [504.017408ms] [504.017408ms] END
	E1002 19:51:46.541817       1 timeout.go:142] post-timeout activity - time-elapsed: 5.611921ms, GET "/api/v1/nodes/multinode-058614-m02" result: <nil>
	E1002 19:51:46.545057       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E1002 19:51:46.550622       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I1002 19:51:46.551777       1 trace.go:236] Trace[1967735236]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:90e88c8b-1580-49b5-94bd-b8463f46e9cf,client:192.168.39.104,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/multinode-058614-m02/status,user-agent:kubelet/v1.28.2 (linux/amd64) kubernetes/89a4ea3,verb:PATCH (02-Oct-2023 19:51:46.045) (total time: 505ms):
	Trace[1967735236]: ---"About to apply patch" 413ms (19:51:46.460)
	Trace[1967735236]: [505.758177ms] [505.758177ms] END
	E1002 19:51:46.551946       1 timeout.go:142] post-timeout activity - time-elapsed: 14.097592ms, PATCH "/api/v1/nodes/multinode-058614-m02/status" result: <nil>
	I1002 19:51:46.683032       1 trace.go:236] Trace[1021678789]: "Create" accept:application/vnd.kubernetes.protobuf, */*,audit-id:b2518388-539b-4bcb-9161-8b03429372bc,client:192.168.39.83,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kube-controller-manager/v1.28.2 (linux/amd64) kubernetes/89a4ea3/system:serviceaccount:kube-system:daemon-set-controller,verb:POST (02-Oct-2023 19:51:46.037) (total time: 645ms):
	Trace[1021678789]: [645.651743ms] [645.651743ms] END
	I1002 19:51:46.691521       1 trace.go:236] Trace[128494824]: "Create" accept:application/vnd.kubernetes.protobuf, */*,audit-id:dbf18f16-d2b6-4a29-a8e0-6315dd2f669f,client:192.168.39.83,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kube-controller-manager/v1.28.2 (linux/amd64) kubernetes/89a4ea3/system:serviceaccount:kube-system:daemon-set-controller,verb:POST (02-Oct-2023 19:51:46.030) (total time: 660ms):
	Trace[128494824]: [660.821018ms] [660.821018ms] END
	I1002 19:51:46.708507       1 trace.go:236] Trace[609926042]: "Patch" accept:application/vnd.kubernetes.protobuf, */*,audit-id:1053a65e-e3f4-4e9f-81c2-3f5605fd0d71,client:192.168.39.83,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/multinode-058614-m02,user-agent:kube-controller-manager/v1.28.2 (linux/amd64) kubernetes/89a4ea3/system:serviceaccount:kube-system:node-controller,verb:PATCH (02-Oct-2023 19:51:46.035) (total time: 673ms):
	Trace[609926042]: ["GuaranteedUpdate etcd3" audit-id:1053a65e-e3f4-4e9f-81c2-3f5605fd0d71,key:/minions/multinode-058614-m02,type:*core.Node,resource:nodes 673ms (19:51:46.035)
	Trace[609926042]:  ---"Txn call completed" 424ms (19:51:46.461)
	Trace[609926042]:  ---"Txn call completed" 229ms (19:51:46.692)]
	Trace[609926042]: ---"About to apply patch" 424ms (19:51:46.461)
	Trace[609926042]: ---"About to apply patch" 230ms (19:51:46.692)
	Trace[609926042]: ---"Object stored in database" 14ms (19:51:46.708)
	Trace[609926042]: [673.347546ms] [673.347546ms] END
	
	* 
	* ==> kube-controller-manager [e08535277a85] <==
	* I1002 19:51:46.328252       1 event.go:307] "Event occurred" object="multinode-058614-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-058614-m02 event: Registered Node multinode-058614-m02 in Controller"
	I1002 19:51:46.687148       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-btqkr"
	I1002 19:51:46.705515       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-bj7lz"
	I1002 19:51:46.719753       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-058614-m02" podCIDRs=["10.244.1.0/24"]
	I1002 19:52:02.020167       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-058614-m02"
	I1002 19:52:04.499396       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1002 19:52:04.521800       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-dxdvv"
	I1002 19:52:04.536302       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-kvr6v"
	I1002 19:52:04.552973       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="54.288929ms"
	I1002 19:52:04.576755       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="23.251566ms"
	I1002 19:52:04.577614       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="494.383µs"
	I1002 19:52:04.598797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="153.748µs"
	I1002 19:52:06.340603       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-dxdvv" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-dxdvv"
	I1002 19:52:06.766842       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="13.196598ms"
	I1002 19:52:06.767183       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="184.407µs"
	I1002 19:52:06.848474       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.108262ms"
	I1002 19:52:06.848737       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="57.617µs"
	I1002 19:52:41.516430       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-058614-m03\" does not exist"
	I1002 19:52:41.516819       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-058614-m02"
	I1002 19:52:41.540771       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-5cjhj"
	I1002 19:52:41.552553       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-058614-m03" podCIDRs=["10.244.2.0/24"]
	I1002 19:52:41.557188       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-gk9jz"
	I1002 19:52:46.348480       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-058614-m03"
	I1002 19:52:46.348598       1 event.go:307] "Event occurred" object="multinode-058614-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-058614-m03 event: Registered Node multinode-058614-m03 in Controller"
	I1002 19:52:53.837375       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-058614-m02"
	
	* 
	* ==> kube-proxy [305c9676de7f] <==
	* I1002 19:50:58.179391       1 server_others.go:69] "Using iptables proxy"
	I1002 19:50:58.204815       1 node.go:141] Successfully retrieved node IP: 192.168.39.83
	I1002 19:50:58.290769       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1002 19:50:58.290816       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1002 19:50:58.295686       1 server_others.go:152] "Using iptables Proxier"
	I1002 19:50:58.296284       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 19:50:58.296596       1 server.go:846] "Version info" version="v1.28.2"
	I1002 19:50:58.296636       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 19:50:58.298533       1 config.go:188] "Starting service config controller"
	I1002 19:50:58.299082       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 19:50:58.299136       1 config.go:315] "Starting node config controller"
	I1002 19:50:58.299142       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 19:50:58.302340       1 config.go:97] "Starting endpoint slice config controller"
	I1002 19:50:58.302354       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 19:50:58.400092       1 shared_informer.go:318] Caches are synced for node config
	I1002 19:50:58.400187       1 shared_informer.go:318] Caches are synced for service config
	I1002 19:50:58.402463       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [260873b19bfc] <==
	* W1002 19:50:41.469739       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 19:50:41.470018       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1002 19:50:41.470483       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 19:50:41.470623       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1002 19:50:41.472710       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 19:50:41.472867       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1002 19:50:42.295841       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 19:50:42.295868       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1002 19:50:42.343445       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 19:50:42.343500       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1002 19:50:42.388671       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1002 19:50:42.388699       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1002 19:50:42.410252       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 19:50:42.410304       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1002 19:50:42.443160       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 19:50:42.443427       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1002 19:50:42.450829       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 19:50:42.451035       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1002 19:50:42.613262       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1002 19:50:42.613285       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1002 19:50:42.714602       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 19:50:42.714746       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1002 19:50:42.959136       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1002 19:50:42.959359       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1002 19:50:45.051460       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-10-02 19:50:10 UTC, ends at Mon 2023-10-02 19:53:28 UTC. --
	Oct 02 19:51:01 multinode-058614 kubelet[2450]: I1002 19:51:01.530700    2450 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4f500b1eba3c60f74e6f45c1e3ed8914ca6f3da42ca1d6a8fff490ca121768c"
	Oct 02 19:51:01 multinode-058614 kubelet[2450]: I1002 19:51:01.562854    2450 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-8r7q6" podStartSLOduration=4.562757385 podCreationTimestamp="2023-10-02 19:50:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 19:51:01.562585626 +0000 UTC m=+16.821765105" watchObservedRunningTime="2023-10-02 19:51:01.562757385 +0000 UTC m=+16.821936866"
	Oct 02 19:51:05 multinode-058614 kubelet[2450]: I1002 19:51:05.597837    2450 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-h5ml2" podStartSLOduration=4.949578555 podCreationTimestamp="2023-10-02 19:50:57 +0000 UTC" firstStartedPulling="2023-10-02 19:51:01.533018389 +0000 UTC m=+16.792197850" lastFinishedPulling="2023-10-02 19:51:05.181191608 +0000 UTC m=+20.440371079" observedRunningTime="2023-10-02 19:51:05.597747507 +0000 UTC m=+20.856926986" watchObservedRunningTime="2023-10-02 19:51:05.597751784 +0000 UTC m=+20.856931263"
	Oct 02 19:51:10 multinode-058614 kubelet[2450]: I1002 19:51:10.160644    2450 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 02 19:51:10 multinode-058614 kubelet[2450]: I1002 19:51:10.200334    2450 topology_manager.go:215] "Topology Admit Handler" podUID="f646d313-d0bd-4b09-9968-ff0d119dfae3" podNamespace="kube-system" podName="coredns-5dd5756b68-ssbfx"
	Oct 02 19:51:10 multinode-058614 kubelet[2450]: I1002 19:51:10.213367    2450 topology_manager.go:215] "Topology Admit Handler" podUID="6107368d-ae74-461e-a41c-fd7cefe35161" podNamespace="kube-system" podName="storage-provisioner"
	Oct 02 19:51:10 multinode-058614 kubelet[2450]: I1002 19:51:10.282016    2450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9crx5\" (UniqueName: \"kubernetes.io/projected/f646d313-d0bd-4b09-9968-ff0d119dfae3-kube-api-access-9crx5\") pod \"coredns-5dd5756b68-ssbfx\" (UID: \"f646d313-d0bd-4b09-9968-ff0d119dfae3\") " pod="kube-system/coredns-5dd5756b68-ssbfx"
	Oct 02 19:51:10 multinode-058614 kubelet[2450]: I1002 19:51:10.282276    2450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xnc4\" (UniqueName: \"kubernetes.io/projected/6107368d-ae74-461e-a41c-fd7cefe35161-kube-api-access-6xnc4\") pod \"storage-provisioner\" (UID: \"6107368d-ae74-461e-a41c-fd7cefe35161\") " pod="kube-system/storage-provisioner"
	Oct 02 19:51:10 multinode-058614 kubelet[2450]: I1002 19:51:10.282450    2450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f646d313-d0bd-4b09-9968-ff0d119dfae3-config-volume\") pod \"coredns-5dd5756b68-ssbfx\" (UID: \"f646d313-d0bd-4b09-9968-ff0d119dfae3\") " pod="kube-system/coredns-5dd5756b68-ssbfx"
	Oct 02 19:51:10 multinode-058614 kubelet[2450]: I1002 19:51:10.282485    2450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6107368d-ae74-461e-a41c-fd7cefe35161-tmp\") pod \"storage-provisioner\" (UID: \"6107368d-ae74-461e-a41c-fd7cefe35161\") " pod="kube-system/storage-provisioner"
	Oct 02 19:51:11 multinode-058614 kubelet[2450]: I1002 19:51:11.231503    2450 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="428c52bb9d8af8c315b3d1c79ca6d686f437895d1f09e81ba81ce0e1d62dacc2"
	Oct 02 19:51:11 multinode-058614 kubelet[2450]: I1002 19:51:11.363233    2450 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5534aeb124e53f70caa01ccc4bd30abc85932bf14694605e4b60d16a2c9bc97"
	Oct 02 19:51:12 multinode-058614 kubelet[2450]: I1002 19:51:12.411213    2450 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.41110264 podCreationTimestamp="2023-10-02 19:50:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 19:51:12.391049797 +0000 UTC m=+27.650229276" watchObservedRunningTime="2023-10-02 19:51:12.41110264 +0000 UTC m=+27.670282112"
	Oct 02 19:51:12 multinode-058614 kubelet[2450]: I1002 19:51:12.436528    2450 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-ssbfx" podStartSLOduration=15.436495888 podCreationTimestamp="2023-10-02 19:50:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 19:51:12.41477871 +0000 UTC m=+27.673958190" watchObservedRunningTime="2023-10-02 19:51:12.436495888 +0000 UTC m=+27.695675370"
	Oct 02 19:51:45 multinode-058614 kubelet[2450]: E1002 19:51:45.114620    2450 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 02 19:51:45 multinode-058614 kubelet[2450]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 19:51:45 multinode-058614 kubelet[2450]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 19:51:45 multinode-058614 kubelet[2450]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 02 19:51:57 multinode-058614 kubelet[2450]: I1002 19:51:57.731823    2450 scope.go:117] "RemoveContainer" containerID="ba14f983de42a8924d0bc3bda336eabd251efc8bf1928c719dec100cd5805a71"
	Oct 02 19:52:04 multinode-058614 kubelet[2450]: I1002 19:52:04.555855    2450 topology_manager.go:215] "Topology Admit Handler" podUID="1759ab93-ad9a-4b81-aa77-e4345e3c9e23" podNamespace="default" podName="busybox-5bc68d56bd-kvr6v"
	Oct 02 19:52:04 multinode-058614 kubelet[2450]: I1002 19:52:04.630066    2450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp9h6\" (UniqueName: \"kubernetes.io/projected/1759ab93-ad9a-4b81-aa77-e4345e3c9e23-kube-api-access-dp9h6\") pod \"busybox-5bc68d56bd-kvr6v\" (UID: \"1759ab93-ad9a-4b81-aa77-e4345e3c9e23\") " pod="default/busybox-5bc68d56bd-kvr6v"
	Oct 02 19:52:45 multinode-058614 kubelet[2450]: E1002 19:52:45.113206    2450 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 02 19:52:45 multinode-058614 kubelet[2450]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 02 19:52:45 multinode-058614 kubelet[2450]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 02 19:52:45 multinode-058614 kubelet[2450]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-058614 -n multinode-058614
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-058614 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StartAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StartAfterStop (21.93s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (2.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-864077 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p old-k8s-version-864077 "sudo crictl images -o json": exit status 1 (238.077548ms)

-- stdout --
	FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-linux-amd64 ssh -p old-k8s-version-864077 \"sudo crictl images -o json\"": exit status 1
start_stop_delete_test.go:304: failed to decode images json invalid character '\x1b' looking for beginning of value. output:
FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-864077 -n old-k8s-version-864077
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-864077 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-864077 logs -n 25: (1.154042671s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-016464                                   | no-preload-016464            | jenkins | v1.31.2 | 02 Oct 23 20:19 UTC | 02 Oct 23 20:19 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-807615                 | embed-certs-807615           | jenkins | v1.31.2 | 02 Oct 23 20:19 UTC | 02 Oct 23 20:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-807615                                  | embed-certs-807615           | jenkins | v1.31.2 | 02 Oct 23 20:19 UTC | 02 Oct 23 20:25 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| delete  | -p gvisor-297880                                       | gvisor-297880                | jenkins | v1.31.2 | 02 Oct 23 20:19 UTC | 02 Oct 23 20:19 UTC |
	| delete  | -p                                                     | disable-driver-mounts-673689 | jenkins | v1.31.2 | 02 Oct 23 20:19 UTC | 02 Oct 23 20:19 UTC |
	|         | disable-driver-mounts-673689                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-063235 | jenkins | v1.31.2 | 02 Oct 23 20:19 UTC | 02 Oct 23 20:21 UTC |
	|         | default-k8s-diff-port-063235                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-016464                  | no-preload-016464            | jenkins | v1.31.2 | 02 Oct 23 20:19 UTC | 02 Oct 23 20:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-016464                                   | no-preload-016464            | jenkins | v1.31.2 | 02 Oct 23 20:19 UTC | 02 Oct 23 20:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-063235  | default-k8s-diff-port-063235 | jenkins | v1.31.2 | 02 Oct 23 20:21 UTC | 02 Oct 23 20:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-063235 | jenkins | v1.31.2 | 02 Oct 23 20:21 UTC | 02 Oct 23 20:21 UTC |
	|         | default-k8s-diff-port-063235                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-063235       | default-k8s-diff-port-063235 | jenkins | v1.31.2 | 02 Oct 23 20:21 UTC | 02 Oct 23 20:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-063235 | jenkins | v1.31.2 | 02 Oct 23 20:21 UTC |                     |
	|         | default-k8s-diff-port-063235                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-807615 sudo                             | embed-certs-807615           | jenkins | v1.31.2 | 02 Oct 23 20:25 UTC | 02 Oct 23 20:25 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p embed-certs-807615                                  | embed-certs-807615           | jenkins | v1.31.2 | 02 Oct 23 20:25 UTC | 02 Oct 23 20:25 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-807615                                  | embed-certs-807615           | jenkins | v1.31.2 | 02 Oct 23 20:25 UTC | 02 Oct 23 20:25 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-807615                                  | embed-certs-807615           | jenkins | v1.31.2 | 02 Oct 23 20:25 UTC | 02 Oct 23 20:25 UTC |
	| delete  | -p embed-certs-807615                                  | embed-certs-807615           | jenkins | v1.31.2 | 02 Oct 23 20:25 UTC | 02 Oct 23 20:25 UTC |
	| start   | -p newest-cni-418729 --memory=2200 --alsologtostderr   | newest-cni-418729            | jenkins | v1.31.2 | 02 Oct 23 20:25 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --kubernetes-version=v1.28.2            |                              |         |         |                     |                     |
	| ssh     | -p no-preload-016464 sudo                              | no-preload-016464            | jenkins | v1.31.2 | 02 Oct 23 20:26 UTC | 02 Oct 23 20:26 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p no-preload-016464                                   | no-preload-016464            | jenkins | v1.31.2 | 02 Oct 23 20:26 UTC | 02 Oct 23 20:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-016464                                   | no-preload-016464            | jenkins | v1.31.2 | 02 Oct 23 20:26 UTC | 02 Oct 23 20:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-016464                                   | no-preload-016464            | jenkins | v1.31.2 | 02 Oct 23 20:26 UTC | 02 Oct 23 20:26 UTC |
	| delete  | -p no-preload-016464                                   | no-preload-016464            | jenkins | v1.31.2 | 02 Oct 23 20:26 UTC | 02 Oct 23 20:26 UTC |
	| start   | -p auto-950653 --memory=3072                           | auto-950653                  | jenkins | v1.31.2 | 02 Oct 23 20:26 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	| ssh     | -p old-k8s-version-864077 sudo                         | old-k8s-version-864077       | jenkins | v1.31.2 | 02 Oct 23 20:26 UTC |                     |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 20:26:30
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:26:30.958010  430895 out.go:296] Setting OutFile to fd 1 ...
	I1002 20:26:30.958108  430895 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 20:26:30.958115  430895 out.go:309] Setting ErrFile to fd 2...
	I1002 20:26:30.958120  430895 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 20:26:30.958320  430895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-390762/.minikube/bin
	I1002 20:26:30.958914  430895 out.go:303] Setting JSON to false
	I1002 20:26:30.959992  430895 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":11342,"bootTime":1696267049,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:26:30.960054  430895 start.go:138] virtualization: kvm guest
	I1002 20:26:30.962630  430895 out.go:177] * [auto-950653] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 20:26:30.964447  430895 out.go:177]   - MINIKUBE_LOCATION=17323
	I1002 20:26:30.964518  430895 notify.go:220] Checking for updates...
	I1002 20:26:30.965902  430895 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:26:30.967557  430895 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17323-390762/kubeconfig
	I1002 20:26:30.968918  430895 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-390762/.minikube
	I1002 20:26:30.970283  430895 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:26:30.971634  430895 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:26:30.973549  430895 config.go:182] Loaded profile config "default-k8s-diff-port-063235": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 20:26:30.973722  430895 config.go:182] Loaded profile config "newest-cni-418729": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 20:26:30.973872  430895 config.go:182] Loaded profile config "old-k8s-version-864077": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1002 20:26:30.973976  430895 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 20:26:31.013702  430895 out.go:177] * Using the kvm2 driver based on user configuration
	I1002 20:26:31.015472  430895 start.go:298] selected driver: kvm2
	I1002 20:26:31.015493  430895 start.go:902] validating driver "kvm2" against <nil>
	I1002 20:26:31.015506  430895 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:26:31.016290  430895 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:26:31.016376  430895 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17323-390762/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 20:26:31.031577  430895 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1002 20:26:31.031624  430895 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 20:26:31.031810  430895 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:26:31.031866  430895 cni.go:84] Creating CNI manager for ""
	I1002 20:26:31.031894  430895 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 20:26:31.031905  430895 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 20:26:31.031916  430895 start_flags.go:321] config:
	{Name:auto-950653 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:auto-950653 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISock
et: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 20:26:31.032095  430895 iso.go:125] acquiring lock: {Name:mkbfe48e1980de2c6c14998e378eaaa3f660e151 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:26:31.033937  430895 out.go:177] * Starting control plane node auto-950653 in cluster auto-950653
	I1002 20:26:31.035337  430895 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 20:26:31.035368  430895 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17323-390762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1002 20:26:31.035384  430895 cache.go:57] Caching tarball of preloaded images
	I1002 20:26:31.035497  430895 preload.go:174] Found /home/jenkins/minikube-integration/17323-390762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1002 20:26:31.035510  430895 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 20:26:31.035609  430895 profile.go:148] Saving config to /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/auto-950653/config.json ...
	I1002 20:26:31.035632  430895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/auto-950653/config.json: {Name:mke30781302973f09fa167a20bd4fab39c8f521a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:26:31.035778  430895 start.go:365] acquiring machines lock for auto-950653: {Name:mk4eec10b828b68be104dfa4b7220ed2aea8b62b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 20:26:31.035814  430895 start.go:369] acquired machines lock for "auto-950653" in 22.386µs
	I1002 20:26:31.035838  430895 start.go:93] Provisioning new machine with config: &{Name:auto-950653 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:auto-950653 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 20:26:31.035931  430895 start.go:125] createHost starting for "" (driver="kvm2")
	I1002 20:26:29.712515  428967 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hwvvs" in "kube-system" namespace has status "Ready":"False"
	I1002 20:26:32.204704  428967 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hwvvs" in "kube-system" namespace has status "Ready":"False"
	I1002 20:26:30.010809  427143 system_pods.go:86] 6 kube-system pods found
	I1002 20:26:30.010848  427143 system_pods.go:89] "coredns-5644d7b6d9-kgdjv" [ffba83fc-5f46-461e-8450-7b5ff343f11b] Running
	I1002 20:26:30.010857  427143 system_pods.go:89] "etcd-old-k8s-version-864077" [d8455a26-c195-4cee-91c5-76a042739321] Running
	I1002 20:26:30.010865  427143 system_pods.go:89] "kube-apiserver-old-k8s-version-864077" [0816935d-ff0b-44d6-b233-6afc85707d81] Running
	I1002 20:26:30.010872  427143 system_pods.go:89] "kube-proxy-tlnwd" [9474acee-85c7-45dd-9570-892cf2a8c1f9] Running
	I1002 20:26:30.010883  427143 system_pods.go:89] "metrics-server-74d5856cc6-hhzk5" [47535270-e92b-4a63-85f2-69f442965bf9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 20:26:30.010897  427143 system_pods.go:89] "storage-provisioner" [26e4eed0-6c83-4806-8a2d-9c0dddaca4fd] Running
	I1002 20:26:30.010924  427143 retry.go:31] will retry after 14.01316203s: missing components: kube-controller-manager, kube-scheduler
	I1002 20:26:30.942760  430238 out.go:204]   - Booting up control plane ...
	I1002 20:26:30.942988  430238 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:26:30.943106  430238 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:26:30.945787  430238 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:26:30.964782  430238 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:26:30.966018  430238 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:26:30.966323  430238 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1002 20:26:31.114664  430238 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 20:26:31.037885  430895 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1002 20:26:31.038018  430895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 20:26:31.038062  430895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:26:31.053036  430895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35739
	I1002 20:26:31.053494  430895 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:26:31.054074  430895 main.go:141] libmachine: Using API Version  1
	I1002 20:26:31.054098  430895 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:26:31.054474  430895 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:26:31.054767  430895 main.go:141] libmachine: (auto-950653) Calling .GetMachineName
	I1002 20:26:31.054936  430895 main.go:141] libmachine: (auto-950653) Calling .DriverName
	I1002 20:26:31.055133  430895 start.go:159] libmachine.API.Create for "auto-950653" (driver="kvm2")
	I1002 20:26:31.055162  430895 client.go:168] LocalClient.Create starting
	I1002 20:26:31.055197  430895 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17323-390762/.minikube/certs/ca.pem
	I1002 20:26:31.055235  430895 main.go:141] libmachine: Decoding PEM data...
	I1002 20:26:31.055257  430895 main.go:141] libmachine: Parsing certificate...
	I1002 20:26:31.055335  430895 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17323-390762/.minikube/certs/cert.pem
	I1002 20:26:31.055363  430895 main.go:141] libmachine: Decoding PEM data...
	I1002 20:26:31.055383  430895 main.go:141] libmachine: Parsing certificate...
	I1002 20:26:31.055409  430895 main.go:141] libmachine: Running pre-create checks...
	I1002 20:26:31.055426  430895 main.go:141] libmachine: (auto-950653) Calling .PreCreateCheck
	I1002 20:26:31.055780  430895 main.go:141] libmachine: (auto-950653) Calling .GetConfigRaw
	I1002 20:26:31.056227  430895 main.go:141] libmachine: Creating machine...
	I1002 20:26:31.056241  430895 main.go:141] libmachine: (auto-950653) Calling .Create
	I1002 20:26:31.056410  430895 main.go:141] libmachine: (auto-950653) Creating KVM machine...
	I1002 20:26:31.057797  430895 main.go:141] libmachine: (auto-950653) DBG | found existing default KVM network
	I1002 20:26:31.059482  430895 main.go:141] libmachine: (auto-950653) DBG | I1002 20:26:31.059304  430918 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015980}
	I1002 20:26:31.064841  430895 main.go:141] libmachine: (auto-950653) DBG | trying to create private KVM network mk-auto-950653 192.168.39.0/24...
	I1002 20:26:31.142745  430895 main.go:141] libmachine: (auto-950653) DBG | private KVM network mk-auto-950653 192.168.39.0/24 created
	I1002 20:26:31.142784  430895 main.go:141] libmachine: (auto-950653) DBG | I1002 20:26:31.142725  430918 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17323-390762/.minikube
	I1002 20:26:31.142802  430895 main.go:141] libmachine: (auto-950653) Setting up store path in /home/jenkins/minikube-integration/17323-390762/.minikube/machines/auto-950653 ...
	I1002 20:26:31.142821  430895 main.go:141] libmachine: (auto-950653) Building disk image from file:///home/jenkins/minikube-integration/17323-390762/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1002 20:26:31.142917  430895 main.go:141] libmachine: (auto-950653) Downloading /home/jenkins/minikube-integration/17323-390762/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17323-390762/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I1002 20:26:31.384718  430895 main.go:141] libmachine: (auto-950653) DBG | I1002 20:26:31.384583  430918 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17323-390762/.minikube/machines/auto-950653/id_rsa...
	I1002 20:26:31.684877  430895 main.go:141] libmachine: (auto-950653) DBG | I1002 20:26:31.684749  430918 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17323-390762/.minikube/machines/auto-950653/auto-950653.rawdisk...
	I1002 20:26:31.684908  430895 main.go:141] libmachine: (auto-950653) DBG | Writing magic tar header
	I1002 20:26:31.684921  430895 main.go:141] libmachine: (auto-950653) DBG | Writing SSH key tar header
	I1002 20:26:31.684933  430895 main.go:141] libmachine: (auto-950653) DBG | I1002 20:26:31.684866  430918 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17323-390762/.minikube/machines/auto-950653 ...
	I1002 20:26:31.685016  430895 main.go:141] libmachine: (auto-950653) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17323-390762/.minikube/machines/auto-950653
	I1002 20:26:31.685074  430895 main.go:141] libmachine: (auto-950653) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17323-390762/.minikube/machines
	I1002 20:26:31.685110  430895 main.go:141] libmachine: (auto-950653) Setting executable bit set on /home/jenkins/minikube-integration/17323-390762/.minikube/machines/auto-950653 (perms=drwx------)
	I1002 20:26:31.685118  430895 main.go:141] libmachine: (auto-950653) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17323-390762/.minikube
	I1002 20:26:31.685132  430895 main.go:141] libmachine: (auto-950653) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17323-390762
	I1002 20:26:31.685143  430895 main.go:141] libmachine: (auto-950653) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1002 20:26:31.685157  430895 main.go:141] libmachine: (auto-950653) DBG | Checking permissions on dir: /home/jenkins
	I1002 20:26:31.685169  430895 main.go:141] libmachine: (auto-950653) DBG | Checking permissions on dir: /home
	I1002 20:26:31.685178  430895 main.go:141] libmachine: (auto-950653) DBG | Skipping /home - not owner
	I1002 20:26:31.685188  430895 main.go:141] libmachine: (auto-950653) Setting executable bit set on /home/jenkins/minikube-integration/17323-390762/.minikube/machines (perms=drwxr-xr-x)
	I1002 20:26:31.685208  430895 main.go:141] libmachine: (auto-950653) Setting executable bit set on /home/jenkins/minikube-integration/17323-390762/.minikube (perms=drwxr-xr-x)
	I1002 20:26:31.685222  430895 main.go:141] libmachine: (auto-950653) Setting executable bit set on /home/jenkins/minikube-integration/17323-390762 (perms=drwxrwxr-x)
	I1002 20:26:31.685233  430895 main.go:141] libmachine: (auto-950653) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1002 20:26:31.685252  430895 main.go:141] libmachine: (auto-950653) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1002 20:26:31.685265  430895 main.go:141] libmachine: (auto-950653) Creating domain...
	I1002 20:26:31.686534  430895 main.go:141] libmachine: (auto-950653) define libvirt domain using xml: 
	I1002 20:26:31.686554  430895 main.go:141] libmachine: (auto-950653) <domain type='kvm'>
	I1002 20:26:31.686565  430895 main.go:141] libmachine: (auto-950653)   <name>auto-950653</name>
	I1002 20:26:31.686576  430895 main.go:141] libmachine: (auto-950653)   <memory unit='MiB'>3072</memory>
	I1002 20:26:31.686589  430895 main.go:141] libmachine: (auto-950653)   <vcpu>2</vcpu>
	I1002 20:26:31.686598  430895 main.go:141] libmachine: (auto-950653)   <features>
	I1002 20:26:31.686610  430895 main.go:141] libmachine: (auto-950653)     <acpi/>
	I1002 20:26:31.686619  430895 main.go:141] libmachine: (auto-950653)     <apic/>
	I1002 20:26:31.686631  430895 main.go:141] libmachine: (auto-950653)     <pae/>
	I1002 20:26:31.686642  430895 main.go:141] libmachine: (auto-950653)     
	I1002 20:26:31.686655  430895 main.go:141] libmachine: (auto-950653)   </features>
	I1002 20:26:31.686677  430895 main.go:141] libmachine: (auto-950653)   <cpu mode='host-passthrough'>
	I1002 20:26:31.686688  430895 main.go:141] libmachine: (auto-950653)   
	I1002 20:26:31.686708  430895 main.go:141] libmachine: (auto-950653)   </cpu>
	I1002 20:26:31.686721  430895 main.go:141] libmachine: (auto-950653)   <os>
	I1002 20:26:31.686734  430895 main.go:141] libmachine: (auto-950653)     <type>hvm</type>
	I1002 20:26:31.686747  430895 main.go:141] libmachine: (auto-950653)     <boot dev='cdrom'/>
	I1002 20:26:31.686759  430895 main.go:141] libmachine: (auto-950653)     <boot dev='hd'/>
	I1002 20:26:31.686808  430895 main.go:141] libmachine: (auto-950653)     <bootmenu enable='no'/>
	I1002 20:26:31.686838  430895 main.go:141] libmachine: (auto-950653)   </os>
	I1002 20:26:31.686848  430895 main.go:141] libmachine: (auto-950653)   <devices>
	I1002 20:26:31.686861  430895 main.go:141] libmachine: (auto-950653)     <disk type='file' device='cdrom'>
	I1002 20:26:31.686892  430895 main.go:141] libmachine: (auto-950653)       <source file='/home/jenkins/minikube-integration/17323-390762/.minikube/machines/auto-950653/boot2docker.iso'/>
	I1002 20:26:31.686915  430895 main.go:141] libmachine: (auto-950653)       <target dev='hdc' bus='scsi'/>
	I1002 20:26:31.686936  430895 main.go:141] libmachine: (auto-950653)       <readonly/>
	I1002 20:26:31.686947  430895 main.go:141] libmachine: (auto-950653)     </disk>
	I1002 20:26:31.686963  430895 main.go:141] libmachine: (auto-950653)     <disk type='file' device='disk'>
	I1002 20:26:31.686978  430895 main.go:141] libmachine: (auto-950653)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1002 20:26:31.686995  430895 main.go:141] libmachine: (auto-950653)       <source file='/home/jenkins/minikube-integration/17323-390762/.minikube/machines/auto-950653/auto-950653.rawdisk'/>
	I1002 20:26:31.687008  430895 main.go:141] libmachine: (auto-950653)       <target dev='hda' bus='virtio'/>
	I1002 20:26:31.687015  430895 main.go:141] libmachine: (auto-950653)     </disk>
	I1002 20:26:31.687023  430895 main.go:141] libmachine: (auto-950653)     <interface type='network'>
	I1002 20:26:31.687033  430895 main.go:141] libmachine: (auto-950653)       <source network='mk-auto-950653'/>
	I1002 20:26:31.687043  430895 main.go:141] libmachine: (auto-950653)       <model type='virtio'/>
	I1002 20:26:31.687053  430895 main.go:141] libmachine: (auto-950653)     </interface>
	I1002 20:26:31.687061  430895 main.go:141] libmachine: (auto-950653)     <interface type='network'>
	I1002 20:26:31.687076  430895 main.go:141] libmachine: (auto-950653)       <source network='default'/>
	I1002 20:26:31.687088  430895 main.go:141] libmachine: (auto-950653)       <model type='virtio'/>
	I1002 20:26:31.687097  430895 main.go:141] libmachine: (auto-950653)     </interface>
	I1002 20:26:31.687105  430895 main.go:141] libmachine: (auto-950653)     <serial type='pty'>
	I1002 20:26:31.687118  430895 main.go:141] libmachine: (auto-950653)       <target port='0'/>
	I1002 20:26:31.687138  430895 main.go:141] libmachine: (auto-950653)     </serial>
	I1002 20:26:31.687161  430895 main.go:141] libmachine: (auto-950653)     <console type='pty'>
	I1002 20:26:31.687180  430895 main.go:141] libmachine: (auto-950653)       <target type='serial' port='0'/>
	I1002 20:26:31.687194  430895 main.go:141] libmachine: (auto-950653)     </console>
	I1002 20:26:31.687206  430895 main.go:141] libmachine: (auto-950653)     <rng model='virtio'>
	I1002 20:26:31.687219  430895 main.go:141] libmachine: (auto-950653)       <backend model='random'>/dev/random</backend>
	I1002 20:26:31.687231  430895 main.go:141] libmachine: (auto-950653)     </rng>
	I1002 20:26:31.687239  430895 main.go:141] libmachine: (auto-950653)     
	I1002 20:26:31.687250  430895 main.go:141] libmachine: (auto-950653)     
	I1002 20:26:31.687260  430895 main.go:141] libmachine: (auto-950653)   </devices>
	I1002 20:26:31.687276  430895 main.go:141] libmachine: (auto-950653) </domain>
	I1002 20:26:31.687292  430895 main.go:141] libmachine: (auto-950653) 
	I1002 20:26:31.691627  430895 main.go:141] libmachine: (auto-950653) DBG | domain auto-950653 has defined MAC address 52:54:00:35:e1:65 in network default
	I1002 20:26:31.692289  430895 main.go:141] libmachine: (auto-950653) Ensuring networks are active...
	I1002 20:26:31.692305  430895 main.go:141] libmachine: (auto-950653) DBG | domain auto-950653 has defined MAC address 52:54:00:08:14:4f in network mk-auto-950653
	I1002 20:26:31.693104  430895 main.go:141] libmachine: (auto-950653) Ensuring network default is active
	I1002 20:26:31.693628  430895 main.go:141] libmachine: (auto-950653) Ensuring network mk-auto-950653 is active
	I1002 20:26:31.694157  430895 main.go:141] libmachine: (auto-950653) Getting domain xml...
	I1002 20:26:31.695011  430895 main.go:141] libmachine: (auto-950653) Creating domain...
	I1002 20:26:33.085962  430895 main.go:141] libmachine: (auto-950653) Waiting to get IP...
	I1002 20:26:33.086882  430895 main.go:141] libmachine: (auto-950653) DBG | domain auto-950653 has defined MAC address 52:54:00:08:14:4f in network mk-auto-950653
	I1002 20:26:33.087361  430895 main.go:141] libmachine: (auto-950653) DBG | unable to find current IP address of domain auto-950653 in network mk-auto-950653
	I1002 20:26:33.087404  430895 main.go:141] libmachine: (auto-950653) DBG | I1002 20:26:33.087324  430918 retry.go:31] will retry after 235.982863ms: waiting for machine to come up
	I1002 20:26:33.325040  430895 main.go:141] libmachine: (auto-950653) DBG | domain auto-950653 has defined MAC address 52:54:00:08:14:4f in network mk-auto-950653
	I1002 20:26:33.325668  430895 main.go:141] libmachine: (auto-950653) DBG | unable to find current IP address of domain auto-950653 in network mk-auto-950653
	I1002 20:26:33.325693  430895 main.go:141] libmachine: (auto-950653) DBG | I1002 20:26:33.325615  430918 retry.go:31] will retry after 269.149076ms: waiting for machine to come up
	I1002 20:26:33.596314  430895 main.go:141] libmachine: (auto-950653) DBG | domain auto-950653 has defined MAC address 52:54:00:08:14:4f in network mk-auto-950653
	I1002 20:26:33.596942  430895 main.go:141] libmachine: (auto-950653) DBG | unable to find current IP address of domain auto-950653 in network mk-auto-950653
	I1002 20:26:33.596970  430895 main.go:141] libmachine: (auto-950653) DBG | I1002 20:26:33.596896  430918 retry.go:31] will retry after 299.613916ms: waiting for machine to come up
	I1002 20:26:33.898654  430895 main.go:141] libmachine: (auto-950653) DBG | domain auto-950653 has defined MAC address 52:54:00:08:14:4f in network mk-auto-950653
	I1002 20:26:33.899297  430895 main.go:141] libmachine: (auto-950653) DBG | unable to find current IP address of domain auto-950653 in network mk-auto-950653
	I1002 20:26:33.899338  430895 main.go:141] libmachine: (auto-950653) DBG | I1002 20:26:33.899236  430918 retry.go:31] will retry after 425.986682ms: waiting for machine to come up
	I1002 20:26:34.327158  430895 main.go:141] libmachine: (auto-950653) DBG | domain auto-950653 has defined MAC address 52:54:00:08:14:4f in network mk-auto-950653
	I1002 20:26:34.327736  430895 main.go:141] libmachine: (auto-950653) DBG | unable to find current IP address of domain auto-950653 in network mk-auto-950653
	I1002 20:26:34.327784  430895 main.go:141] libmachine: (auto-950653) DBG | I1002 20:26:34.327675  430918 retry.go:31] will retry after 744.727725ms: waiting for machine to come up
	I1002 20:26:35.073568  430895 main.go:141] libmachine: (auto-950653) DBG | domain auto-950653 has defined MAC address 52:54:00:08:14:4f in network mk-auto-950653
	I1002 20:26:35.074126  430895 main.go:141] libmachine: (auto-950653) DBG | unable to find current IP address of domain auto-950653 in network mk-auto-950653
	I1002 20:26:35.074155  430895 main.go:141] libmachine: (auto-950653) DBG | I1002 20:26:35.074092  430918 retry.go:31] will retry after 931.002913ms: waiting for machine to come up
	I1002 20:26:34.205548  428967 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hwvvs" in "kube-system" namespace has status "Ready":"False"
	I1002 20:26:36.206866  428967 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hwvvs" in "kube-system" namespace has status "Ready":"False"
	I1002 20:26:38.617873  430238 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.504297 seconds
	I1002 20:26:38.618063  430238 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 20:26:38.639765  430238 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 20:26:39.182466  430238 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 20:26:39.183033  430238 kubeadm.go:322] [mark-control-plane] Marking the node newest-cni-418729 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 20:26:39.702198  430238 kubeadm.go:322] [bootstrap-token] Using token: vmiuoe.ssa0o6c3arofnrm2
	I1002 20:26:39.703574  430238 out.go:204]   - Configuring RBAC rules ...
	I1002 20:26:39.703702  430238 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 20:26:39.711610  430238 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 20:26:39.743297  430238 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 20:26:39.762041  430238 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 20:26:39.770114  430238 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 20:26:39.774349  430238 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 20:26:39.796665  430238 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 20:26:40.098653  430238 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 20:26:40.158211  430238 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 20:26:40.160381  430238 kubeadm.go:322] 
	I1002 20:26:40.160484  430238 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 20:26:40.160496  430238 kubeadm.go:322] 
	I1002 20:26:40.160594  430238 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 20:26:40.160606  430238 kubeadm.go:322] 
	I1002 20:26:40.160637  430238 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 20:26:40.160710  430238 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 20:26:40.160780  430238 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 20:26:40.160788  430238 kubeadm.go:322] 
	I1002 20:26:40.160857  430238 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1002 20:26:40.160864  430238 kubeadm.go:322] 
	I1002 20:26:40.160931  430238 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 20:26:40.160938  430238 kubeadm.go:322] 
	I1002 20:26:40.160999  430238 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 20:26:40.161095  430238 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 20:26:40.161177  430238 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 20:26:40.161184  430238 kubeadm.go:322] 
	I1002 20:26:40.161355  430238 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 20:26:40.161443  430238 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 20:26:40.161450  430238 kubeadm.go:322] 
	I1002 20:26:40.161552  430238 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vmiuoe.ssa0o6c3arofnrm2 \
	I1002 20:26:40.161676  430238 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:34e35905a788df884ba37f75e8ba6d269171b9f9a012b72423ad6eee1d6bffad \
	I1002 20:26:40.161705  430238 kubeadm.go:322] 	--control-plane 
	I1002 20:26:40.161712  430238 kubeadm.go:322] 
	I1002 20:26:40.161832  430238 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 20:26:40.161838  430238 kubeadm.go:322] 
	I1002 20:26:40.161945  430238 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vmiuoe.ssa0o6c3arofnrm2 \
	I1002 20:26:40.162068  430238 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:34e35905a788df884ba37f75e8ba6d269171b9f9a012b72423ad6eee1d6bffad 
	I1002 20:26:40.165937  430238 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:26:40.165966  430238 cni.go:84] Creating CNI manager for ""
	I1002 20:26:40.165984  430238 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 20:26:40.167697  430238 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 20:26:40.168884  430238 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 20:26:40.185192  430238 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 20:26:40.203225  430238 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 20:26:40.203300  430238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:26:40.203336  430238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=02d3b4696241894a75ebcb6562f5842e65de7b86 minikube.k8s.io/name=newest-cni-418729 minikube.k8s.io/updated_at=2023_10_02T20_26_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:26:36.006600  430895 main.go:141] libmachine: (auto-950653) DBG | domain auto-950653 has defined MAC address 52:54:00:08:14:4f in network mk-auto-950653
	I1002 20:26:36.007134  430895 main.go:141] libmachine: (auto-950653) DBG | unable to find current IP address of domain auto-950653 in network mk-auto-950653
	I1002 20:26:36.007174  430895 main.go:141] libmachine: (auto-950653) DBG | I1002 20:26:36.007096  430918 retry.go:31] will retry after 895.774253ms: waiting for machine to come up
	I1002 20:26:36.905333  430895 main.go:141] libmachine: (auto-950653) DBG | domain auto-950653 has defined MAC address 52:54:00:08:14:4f in network mk-auto-950653
	I1002 20:26:36.905957  430895 main.go:141] libmachine: (auto-950653) DBG | unable to find current IP address of domain auto-950653 in network mk-auto-950653
	I1002 20:26:36.905986  430895 main.go:141] libmachine: (auto-950653) DBG | I1002 20:26:36.905904  430918 retry.go:31] will retry after 1.14209337s: waiting for machine to come up
	I1002 20:26:38.049463  430895 main.go:141] libmachine: (auto-950653) DBG | domain auto-950653 has defined MAC address 52:54:00:08:14:4f in network mk-auto-950653
	I1002 20:26:38.050079  430895 main.go:141] libmachine: (auto-950653) DBG | unable to find current IP address of domain auto-950653 in network mk-auto-950653
	I1002 20:26:38.050111  430895 main.go:141] libmachine: (auto-950653) DBG | I1002 20:26:38.050037  430918 retry.go:31] will retry after 1.716823152s: waiting for machine to come up
	I1002 20:26:39.768830  430895 main.go:141] libmachine: (auto-950653) DBG | domain auto-950653 has defined MAC address 52:54:00:08:14:4f in network mk-auto-950653
	I1002 20:26:39.769555  430895 main.go:141] libmachine: (auto-950653) DBG | unable to find current IP address of domain auto-950653 in network mk-auto-950653
	I1002 20:26:39.769750  430895 main.go:141] libmachine: (auto-950653) DBG | I1002 20:26:39.769696  430918 retry.go:31] will retry after 1.670892651s: waiting for machine to come up
	I1002 20:26:38.187953  428967 pod_ready.go:81] duration metric: took 4m0.000535488s waiting for pod "metrics-server-57f55c9bc5-hwvvs" in "kube-system" namespace to be "Ready" ...
	E1002 20:26:38.188010  428967 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1002 20:26:38.188036  428967 pod_ready.go:38] duration metric: took 4m6.252445655s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 20:26:38.188078  428967 kubeadm.go:640] restartCluster took 4m25.520295602s
	W1002 20:26:38.188158  428967 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1002 20:26:38.188206  428967 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1002 20:26:44.032511  427143 system_pods.go:86] 8 kube-system pods found
	I1002 20:26:44.032545  427143 system_pods.go:89] "coredns-5644d7b6d9-kgdjv" [ffba83fc-5f46-461e-8450-7b5ff343f11b] Running
	I1002 20:26:44.032551  427143 system_pods.go:89] "etcd-old-k8s-version-864077" [d8455a26-c195-4cee-91c5-76a042739321] Running
	I1002 20:26:44.032556  427143 system_pods.go:89] "kube-apiserver-old-k8s-version-864077" [0816935d-ff0b-44d6-b233-6afc85707d81] Running
	I1002 20:26:44.032562  427143 system_pods.go:89] "kube-controller-manager-old-k8s-version-864077" [57191fc2-9500-4115-a253-f7511db21ba1] Running
	I1002 20:26:44.032566  427143 system_pods.go:89] "kube-proxy-tlnwd" [9474acee-85c7-45dd-9570-892cf2a8c1f9] Running
	I1002 20:26:44.032571  427143 system_pods.go:89] "kube-scheduler-old-k8s-version-864077" [10b38f3d-840b-4e4f-b57a-bace6d6c6380] Running
	I1002 20:26:44.032578  427143 system_pods.go:89] "metrics-server-74d5856cc6-hhzk5" [47535270-e92b-4a63-85f2-69f442965bf9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 20:26:44.032585  427143 system_pods.go:89] "storage-provisioner" [26e4eed0-6c83-4806-8a2d-9c0dddaca4fd] Running
	I1002 20:26:44.032593  427143 system_pods.go:126] duration metric: took 1m8.93274307s to wait for k8s-apps to be running ...
	I1002 20:26:44.032600  427143 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 20:26:44.032646  427143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:26:44.055926  427143 system_svc.go:56] duration metric: took 23.311737ms WaitForService to wait for kubelet.
	I1002 20:26:44.055965  427143 kubeadm.go:581] duration metric: took 1m13.790851338s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 20:26:44.055994  427143 node_conditions.go:102] verifying NodePressure condition ...
	I1002 20:26:44.060166  427143 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 20:26:44.060199  427143 node_conditions.go:123] node cpu capacity is 2
	I1002 20:26:44.060211  427143 node_conditions.go:105] duration metric: took 4.211654ms to run NodePressure ...
	I1002 20:26:44.060226  427143 start.go:228] waiting for startup goroutines ...
	I1002 20:26:44.060234  427143 start.go:233] waiting for cluster config update ...
	I1002 20:26:44.060246  427143 start.go:242] writing updated cluster config ...
	I1002 20:26:44.060694  427143 ssh_runner.go:195] Run: rm -f paused
	I1002 20:26:44.131921  427143 start.go:600] kubectl: 1.28.2, cluster: 1.16.0 (minor skew: 12)
	I1002 20:26:44.135041  427143 out.go:177] 
	W1002 20:26:44.136738  427143 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.16.0.
	I1002 20:26:44.138071  427143 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1002 20:26:44.139659  427143 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-864077" cluster and "default" namespace by default
	I1002 20:26:40.841458  430238 ops.go:34] apiserver oom_adj: -16
	I1002 20:26:40.841606  430238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:26:41.062767  430238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:26:41.671354  430238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:26:42.171398  430238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:26:42.671555  430238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:26:43.171380  430238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:26:43.670787  430238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:26:44.170911  430238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:26:44.671533  430238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:26:45.171579  430238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:26:45.670837  430238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:26:41.442701  430895 main.go:141] libmachine: (auto-950653) DBG | domain auto-950653 has defined MAC address 52:54:00:08:14:4f in network mk-auto-950653
	I1002 20:26:41.443180  430895 main.go:141] libmachine: (auto-950653) DBG | unable to find current IP address of domain auto-950653 in network mk-auto-950653
	I1002 20:26:41.443212  430895 main.go:141] libmachine: (auto-950653) DBG | I1002 20:26:41.443136  430918 retry.go:31] will retry after 2.361755534s: waiting for machine to come up
	I1002 20:26:43.806509  430895 main.go:141] libmachine: (auto-950653) DBG | domain auto-950653 has defined MAC address 52:54:00:08:14:4f in network mk-auto-950653
	I1002 20:26:43.807147  430895 main.go:141] libmachine: (auto-950653) DBG | unable to find current IP address of domain auto-950653 in network mk-auto-950653
	I1002 20:26:43.807190  430895 main.go:141] libmachine: (auto-950653) DBG | I1002 20:26:43.807095  430918 retry.go:31] will retry after 2.200529551s: waiting for machine to come up
	I1002 20:26:48.483540  428967 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (10.295300806s)
	I1002 20:26:48.483627  428967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:26:48.498654  428967 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:26:48.508544  428967 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:26:48.518554  428967 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:26:48.518596  428967 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1002 20:26:48.577819  428967 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1002 20:26:48.577918  428967 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 20:26:48.776776  428967 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:26:48.776908  428967 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:26:48.777020  428967 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 20:26:49.158464  428967 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:26:49.160315  428967 out.go:204]   - Generating certificates and keys ...
	I1002 20:26:49.160480  428967 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 20:26:49.160595  428967 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 20:26:49.160701  428967 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 20:26:49.161034  428967 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1002 20:26:49.162199  428967 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 20:26:49.163746  428967 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1002 20:26:49.165724  428967 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1002 20:26:49.167206  428967 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1002 20:26:49.168587  428967 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 20:26:49.169323  428967 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 20:26:49.169941  428967 kubeadm.go:322] [certs] Using the existing "sa" key
	I1002 20:26:49.170021  428967 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:26:49.355750  428967 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:26:49.860408  428967 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:26:49.916393  428967 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:26:50.074637  428967 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:26:50.075251  428967 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:26:50.077757  428967 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:26:46.171391  430238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:26:46.671756  430238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:26:47.170737  430238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:26:47.670959  430238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:26:48.171619  430238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:26:48.671189  430238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:26:49.171299  430238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:26:49.671435  430238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:26:50.170896  430238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:26:50.670817  430238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:26:46.009856  430895 main.go:141] libmachine: (auto-950653) DBG | domain auto-950653 has defined MAC address 52:54:00:08:14:4f in network mk-auto-950653
	I1002 20:26:46.010414  430895 main.go:141] libmachine: (auto-950653) DBG | unable to find current IP address of domain auto-950653 in network mk-auto-950653
	I1002 20:26:46.010446  430895 main.go:141] libmachine: (auto-950653) DBG | I1002 20:26:46.010366  430918 retry.go:31] will retry after 3.151554936s: waiting for machine to come up
	I1002 20:26:49.164659  430895 main.go:141] libmachine: (auto-950653) DBG | domain auto-950653 has defined MAC address 52:54:00:08:14:4f in network mk-auto-950653
	I1002 20:26:49.165235  430895 main.go:141] libmachine: (auto-950653) DBG | unable to find current IP address of domain auto-950653 in network mk-auto-950653
	I1002 20:26:49.165269  430895 main.go:141] libmachine: (auto-950653) DBG | I1002 20:26:49.165166  430918 retry.go:31] will retry after 5.111179969s: waiting for machine to come up
	I1002 20:26:50.079284  428967 out.go:204]   - Booting up control plane ...
	I1002 20:26:50.079473  428967 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:26:50.079563  428967 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:26:50.080060  428967 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:26:50.097526  428967 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:26:50.097863  428967 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:26:50.097963  428967 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1002 20:26:50.222902  428967 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 20:26:51.170829  430238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:26:51.670882  430238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:26:52.171322  430238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:26:52.671173  430238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:26:53.171689  430238 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 20:26:53.312691  430238 kubeadm.go:1081] duration metric: took 13.109459193s to wait for elevateKubeSystemPrivileges.
	I1002 20:26:53.312729  430238 kubeadm.go:406] StartCluster complete in 26.138756213s
	I1002 20:26:53.312752  430238 settings.go:142] acquiring lock: {Name:mkb4ca40f1939e3445461ba1faa925717a2f2fb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:26:53.312851  430238 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17323-390762/kubeconfig
	I1002 20:26:53.314465  430238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-390762/kubeconfig: {Name:mk74ddabf197e37062c31902aa8bd3a9b6ce152f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:26:53.314748  430238 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 20:26:53.314892  430238 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 20:26:53.314988  430238 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-418729"
	I1002 20:26:53.315011  430238 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-418729"
	I1002 20:26:53.315016  430238 config.go:182] Loaded profile config "newest-cni-418729": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 20:26:53.315027  430238 addons.go:69] Setting default-storageclass=true in profile "newest-cni-418729"
	I1002 20:26:53.315045  430238 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-418729"
	I1002 20:26:53.315060  430238 host.go:66] Checking if "newest-cni-418729" exists ...
	I1002 20:26:53.315089  430238 cache.go:107] acquiring lock: {Name:mk78ee4fdb67b48581b3725713190fcd9593134e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:26:53.315151  430238 cache.go:115] /home/jenkins/minikube-integration/17323-390762/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I1002 20:26:53.315161  430238 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17323-390762/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 79.255µs
	I1002 20:26:53.315172  430238 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17323-390762/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I1002 20:26:53.315179  430238 cache.go:87] Successfully saved all images to host disk.
	I1002 20:26:53.315370  430238 config.go:182] Loaded profile config "newest-cni-418729": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 20:26:53.315536  430238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 20:26:53.315536  430238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 20:26:53.315562  430238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:26:53.315565  430238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:26:53.315766  430238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 20:26:53.315805  430238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:26:53.333910  430238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35057
	I1002 20:26:53.334406  430238 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:26:53.334890  430238 main.go:141] libmachine: Using API Version  1
	I1002 20:26:53.334911  430238 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:26:53.335244  430238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42897
	I1002 20:26:53.335360  430238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45467
	I1002 20:26:53.335522  430238 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:26:53.335740  430238 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:26:53.335822  430238 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:26:53.336105  430238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 20:26:53.336141  430238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:26:53.336182  430238 main.go:141] libmachine: Using API Version  1
	I1002 20:26:53.336231  430238 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:26:53.336391  430238 main.go:141] libmachine: Using API Version  1
	I1002 20:26:53.336415  430238 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:26:53.336630  430238 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:26:53.336742  430238 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:26:53.336769  430238 main.go:141] libmachine: (newest-cni-418729) Calling .GetState
	I1002 20:26:53.336929  430238 main.go:141] libmachine: (newest-cni-418729) Calling .GetState
	I1002 20:26:53.339466  430238 addons.go:231] Setting addon default-storageclass=true in "newest-cni-418729"
	I1002 20:26:53.339511  430238 host.go:66] Checking if "newest-cni-418729" exists ...
	I1002 20:26:53.339915  430238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 20:26:53.339915  430238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 20:26:53.339935  430238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:26:53.339946  430238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:26:53.356362  430238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43857
	I1002 20:26:53.356983  430238 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:26:53.357068  430238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33897
	I1002 20:26:53.357384  430238 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:26:53.357695  430238 main.go:141] libmachine: Using API Version  1
	I1002 20:26:53.357707  430238 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:26:53.357836  430238 main.go:141] libmachine: Using API Version  1
	I1002 20:26:53.357853  430238 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:26:53.358159  430238 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:26:53.358291  430238 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:26:53.358668  430238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 20:26:53.358694  430238 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 20:26:53.358954  430238 main.go:141] libmachine: (newest-cni-418729) Calling .DriverName
	I1002 20:26:53.359161  430238 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 20:26:53.359193  430238 main.go:141] libmachine: (newest-cni-418729) Calling .GetSSHHostname
	I1002 20:26:53.362837  430238 main.go:141] libmachine: (newest-cni-418729) DBG | domain newest-cni-418729 has defined MAC address 52:54:00:e4:8f:b9 in network mk-newest-cni-418729
	I1002 20:26:53.363316  430238 main.go:141] libmachine: (newest-cni-418729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:8f:b9", ip: ""} in network mk-newest-cni-418729: {Iface:virbr2 ExpiryTime:2023-10-02 21:26:07 +0000 UTC Type:0 Mac:52:54:00:e4:8f:b9 Iaid: IPaddr:192.168.50.71 Prefix:24 Hostname:newest-cni-418729 Clientid:01:52:54:00:e4:8f:b9}
	I1002 20:26:53.363345  430238 main.go:141] libmachine: (newest-cni-418729) DBG | domain newest-cni-418729 has defined IP address 192.168.50.71 and MAC address 52:54:00:e4:8f:b9 in network mk-newest-cni-418729
	I1002 20:26:53.363486  430238 main.go:141] libmachine: (newest-cni-418729) Calling .GetSSHPort
	I1002 20:26:53.363639  430238 main.go:141] libmachine: (newest-cni-418729) Calling .GetSSHKeyPath
	I1002 20:26:53.363753  430238 main.go:141] libmachine: (newest-cni-418729) Calling .GetSSHUsername
	I1002 20:26:53.363874  430238 sshutil.go:53] new ssh client: &{IP:192.168.50.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/newest-cni-418729/id_rsa Username:docker}
	I1002 20:26:53.365617  430238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33143
	I1002 20:26:53.365969  430238 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:26:53.366441  430238 main.go:141] libmachine: Using API Version  1
	I1002 20:26:53.366464  430238 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:26:53.366795  430238 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:26:53.366974  430238 main.go:141] libmachine: (newest-cni-418729) Calling .GetState
	I1002 20:26:53.368490  430238 main.go:141] libmachine: (newest-cni-418729) Calling .DriverName
	I1002 20:26:53.370559  430238 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:26:53.372444  430238 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:26:53.372464  430238 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:26:53.372484  430238 main.go:141] libmachine: (newest-cni-418729) Calling .GetSSHHostname
	I1002 20:26:53.376320  430238 main.go:141] libmachine: (newest-cni-418729) DBG | domain newest-cni-418729 has defined MAC address 52:54:00:e4:8f:b9 in network mk-newest-cni-418729
	I1002 20:26:53.376766  430238 main.go:141] libmachine: (newest-cni-418729) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:8f:b9", ip: ""} in network mk-newest-cni-418729: {Iface:virbr2 ExpiryTime:2023-10-02 21:26:07 +0000 UTC Type:0 Mac:52:54:00:e4:8f:b9 Iaid: IPaddr:192.168.50.71 Prefix:24 Hostname:newest-cni-418729 Clientid:01:52:54:00:e4:8f:b9}
	I1002 20:26:53.376795  430238 main.go:141] libmachine: (newest-cni-418729) DBG | domain newest-cni-418729 has defined IP address 192.168.50.71 and MAC address 52:54:00:e4:8f:b9 in network mk-newest-cni-418729
	I1002 20:26:53.377024  430238 main.go:141] libmachine: (newest-cni-418729) Calling .GetSSHPort
	I1002 20:26:53.377291  430238 main.go:141] libmachine: (newest-cni-418729) Calling .GetSSHKeyPath
	I1002 20:26:53.377493  430238 main.go:141] libmachine: (newest-cni-418729) Calling .GetSSHUsername
	I1002 20:26:53.377660  430238 sshutil.go:53] new ssh client: &{IP:192.168.50.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/newest-cni-418729/id_rsa Username:docker}
	I1002 20:26:53.389132  430238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38357
	I1002 20:26:53.389507  430238 main.go:141] libmachine: () Calling .GetVersion
	I1002 20:26:53.390017  430238 main.go:141] libmachine: Using API Version  1
	I1002 20:26:53.390039  430238 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 20:26:53.390459  430238 main.go:141] libmachine: () Calling .GetMachineName
	I1002 20:26:53.390664  430238 main.go:141] libmachine: (newest-cni-418729) Calling .GetState
	I1002 20:26:53.392533  430238 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-418729" context rescaled to 1 replicas
	I1002 20:26:53.392567  430238 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.71 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 20:26:53.394439  430238 out.go:177] * Verifying Kubernetes components...
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-10-02 20:19:21 UTC, ends at Mon 2023-10-02 20:26:55 UTC. --
	Oct 02 20:25:49 old-k8s-version-864077 dockerd[1090]: time="2023-10-02T20:25:49.306350673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 20:25:49 old-k8s-version-864077 dockerd[1090]: time="2023-10-02T20:25:49.306379847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 02 20:25:49 old-k8s-version-864077 dockerd[1090]: time="2023-10-02T20:25:49.306398823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 20:26:04 old-k8s-version-864077 dockerd[1084]: time="2023-10-02T20:26:04.441064187Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 02 20:26:04 old-k8s-version-864077 dockerd[1084]: time="2023-10-02T20:26:04.441207605Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 02 20:26:04 old-k8s-version-864077 dockerd[1084]: time="2023-10-02T20:26:04.450757383Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 02 20:26:04 old-k8s-version-864077 dockerd[1090]: time="2023-10-02T20:26:04.545829527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 02 20:26:04 old-k8s-version-864077 dockerd[1090]: time="2023-10-02T20:26:04.546207371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 20:26:04 old-k8s-version-864077 dockerd[1090]: time="2023-10-02T20:26:04.546237265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 02 20:26:04 old-k8s-version-864077 dockerd[1090]: time="2023-10-02T20:26:04.546255939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 20:26:05 old-k8s-version-864077 dockerd[1084]: time="2023-10-02T20:26:05.057090445Z" level=info msg="ignoring event" container=ce7fb60ea0e93a51f01e6e88b4d2a7e59bd1b90c433698ceef3471e2645ca9a4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 20:26:05 old-k8s-version-864077 dockerd[1090]: time="2023-10-02T20:26:05.060920826Z" level=info msg="shim disconnected" id=ce7fb60ea0e93a51f01e6e88b4d2a7e59bd1b90c433698ceef3471e2645ca9a4 namespace=moby
	Oct 02 20:26:05 old-k8s-version-864077 dockerd[1090]: time="2023-10-02T20:26:05.063137368Z" level=warning msg="cleaning up after shim disconnected" id=ce7fb60ea0e93a51f01e6e88b4d2a7e59bd1b90c433698ceef3471e2645ca9a4 namespace=moby
	Oct 02 20:26:05 old-k8s-version-864077 dockerd[1090]: time="2023-10-02T20:26:05.063231505Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 02 20:26:31 old-k8s-version-864077 dockerd[1084]: time="2023-10-02T20:26:31.463184920Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 02 20:26:31 old-k8s-version-864077 dockerd[1084]: time="2023-10-02T20:26:31.463262001Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 02 20:26:31 old-k8s-version-864077 dockerd[1084]: time="2023-10-02T20:26:31.466455966Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 02 20:26:35 old-k8s-version-864077 dockerd[1090]: time="2023-10-02T20:26:35.505250473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 02 20:26:35 old-k8s-version-864077 dockerd[1090]: time="2023-10-02T20:26:35.505460848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 20:26:35 old-k8s-version-864077 dockerd[1090]: time="2023-10-02T20:26:35.506132037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 02 20:26:35 old-k8s-version-864077 dockerd[1090]: time="2023-10-02T20:26:35.506242838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 20:26:36 old-k8s-version-864077 dockerd[1084]: time="2023-10-02T20:26:36.013932710Z" level=info msg="ignoring event" container=4b53f1533867b9616569221efe5fe0716b8c5fa974d1ba7b70d2f3060c320746 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 20:26:36 old-k8s-version-864077 dockerd[1090]: time="2023-10-02T20:26:36.020282243Z" level=info msg="shim disconnected" id=4b53f1533867b9616569221efe5fe0716b8c5fa974d1ba7b70d2f3060c320746 namespace=moby
	Oct 02 20:26:36 old-k8s-version-864077 dockerd[1090]: time="2023-10-02T20:26:36.020711085Z" level=warning msg="cleaning up after shim disconnected" id=4b53f1533867b9616569221efe5fe0716b8c5fa974d1ba7b70d2f3060c320746 namespace=moby
	Oct 02 20:26:36 old-k8s-version-864077 dockerd[1090]: time="2023-10-02T20:26:36.020869256Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE                    COMMAND                  CREATED              STATUS                      PORTS     NAMES
	4b53f1533867   a90209bb39e3             "nginx -g 'daemon of…"   20 seconds ago       Exited (1) 19 seconds ago             k8s_dashboard-metrics-scraper_dashboard-metrics-scraper-d6b4b5544-vlrgl_kubernetes-dashboard_5f21c646-6748-4d5a-a5ae-bef5b3f54584_3
	47af95735396   kubernetesui/dashboard   "/dashboard --insecu…"   About a minute ago   Up About a minute                     k8s_kubernetes-dashboard_kubernetes-dashboard-84b68f675b-fg5j5_kubernetes-dashboard_43a0d2c3-81a0-40ce-94c6-1e9e36bfc8b6_0
	779b00a1688e   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_metrics-server-74d5856cc6-hhzk5_kube-system_47535270-e92b-4a63-85f2-69f442965bf9_0
	6a60c8047c31   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kubernetes-dashboard-84b68f675b-fg5j5_kubernetes-dashboard_43a0d2c3-81a0-40ce-94c6-1e9e36bfc8b6_0
	bd2b0f0944ce   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_dashboard-metrics-scraper-d6b4b5544-vlrgl_kubernetes-dashboard_5f21c646-6748-4d5a-a5ae-bef5b3f54584_0
	f1926558563b   6e38f40d628d             "/storage-provisioner"   About a minute ago   Up About a minute                     k8s_storage-provisioner_storage-provisioner_kube-system_26e4eed0-6c83-4806-8a2d-9c0dddaca4fd_0
	ce5e73ec2832   bf261d157914             "/coredns -conf /etc…"   About a minute ago   Up About a minute                     k8s_coredns_coredns-5644d7b6d9-kgdjv_kube-system_ffba83fc-5f46-461e-8450-7b5ff343f11b_0
	a4d965f3dceb   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_storage-provisioner_kube-system_26e4eed0-6c83-4806-8a2d-9c0dddaca4fd_0
	a6279f54e55f   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_coredns-5644d7b6d9-kgdjv_kube-system_ffba83fc-5f46-461e-8450-7b5ff343f11b_0
	5b1ee6f514c7   c21b0c7400f9             "/usr/local/bin/kube…"   About a minute ago   Up About a minute                     k8s_kube-proxy_kube-proxy-tlnwd_kube-system_9474acee-85c7-45dd-9570-892cf2a8c1f9_0
	a54d04e890ae   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-proxy-tlnwd_kube-system_9474acee-85c7-45dd-9570-892cf2a8c1f9_0
	d7e41a1bd713   b2756210eeab             "etcd --advertise-cl…"   About a minute ago   Up About a minute                     k8s_etcd_etcd-old-k8s-version-864077_kube-system_2184e7ad84642c8d8f6e23ff2fd1effe_0
	8e524ee86fa4   301ddc62b80b             "kube-scheduler --au…"   About a minute ago   Up About a minute                     k8s_kube-scheduler_kube-scheduler-old-k8s-version-864077_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	fdd5487c07ff   06a629a7e51c             "kube-controller-man…"   About a minute ago   Up About a minute                     k8s_kube-controller-manager_kube-controller-manager-old-k8s-version-864077_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	587acca83733   b305571ca60a             "kube-apiserver --ad…"   About a minute ago   Up About a minute                     k8s_kube-apiserver_kube-apiserver-old-k8s-version-864077_kube-system_014e58430ca69d3399210551b7651e3a_0
	82306732b543   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_etcd-old-k8s-version-864077_kube-system_2184e7ad84642c8d8f6e23ff2fd1effe_0
	c94c264e7284   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-scheduler-old-k8s-version-864077_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	def01046b14b   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-controller-manager-old-k8s-version-864077_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	61e11b5af7d1   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-apiserver-old-k8s-version-864077_kube-system_014e58430ca69d3399210551b7651e3a_0
	time="2023-10-02T20:26:55Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
	
	* 
	* ==> coredns [ce5e73ec2832] <==
	* .:53
	2023-10-02T20:25:33.579Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-10-02T20:25:33.579Z [INFO] CoreDNS-1.6.2
	2023-10-02T20:25:33.579Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-10-02T20:26:03.000Z [INFO] plugin/reload: Running configuration MD5 = 6d61b2f41ed11e6ad276aa627263dbc3
	[INFO] Reloading complete
	2023-10-02T20:26:03.025Z [INFO] 127.0.0.1:53890 - 49726 "HINFO IN 5889362922656335791.4716625760192441546. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025220733s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-864077
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-864077
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=02d3b4696241894a75ebcb6562f5842e65de7b86
	                    minikube.k8s.io/name=old-k8s-version-864077
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T20_25_14_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 20:25:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 20:26:09 +0000   Mon, 02 Oct 2023 20:25:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 20:26:09 +0000   Mon, 02 Oct 2023 20:25:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 20:26:09 +0000   Mon, 02 Oct 2023 20:25:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 20:26:09 +0000   Mon, 02 Oct 2023 20:25:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.146
	  Hostname:    old-k8s-version-864077
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 f6bd34e8ad9b44908c8953a9dddd4189
	 System UUID:                f6bd34e8-ad9b-4490-8c89-53a9dddd4189
	 Boot ID:                    f1f8f98e-c77e-410a-b161-8f4a058f307d
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://24.0.6
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-kgdjv                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     86s
	  kube-system                etcd-old-k8s-version-864077                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                kube-apiserver-old-k8s-version-864077             250m (12%)    0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                kube-controller-manager-old-k8s-version-864077    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                kube-proxy-tlnwd                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                kube-scheduler-old-k8s-version-864077             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                metrics-server-74d5856cc6-hhzk5                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         82s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kubernetes-dashboard       dashboard-metrics-scraper-d6b4b5544-vlrgl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kubernetes-dashboard       kubernetes-dashboard-84b68f675b-fg5j5             0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From                                Message
	  ----    ------                   ----                 ----                                -------
	  Normal  NodeAllocatableEnforced  113s                 kubelet, old-k8s-version-864077     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  112s (x8 over 113s)  kubelet, old-k8s-version-864077     Node old-k8s-version-864077 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s (x8 over 113s)  kubelet, old-k8s-version-864077     Node old-k8s-version-864077 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s (x7 over 113s)  kubelet, old-k8s-version-864077     Node old-k8s-version-864077 status is now: NodeHasSufficientPID
	  Normal  Starting                 83s                  kube-proxy, old-k8s-version-864077  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Oct 2 20:19] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.072214] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.620886] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.809938] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.158170] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.684894] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.891252] systemd-fstab-generator[515]: Ignoring "noauto" for root device
	[  +0.150024] systemd-fstab-generator[526]: Ignoring "noauto" for root device
	[  +1.361602] systemd-fstab-generator[794]: Ignoring "noauto" for root device
	[  +0.409430] systemd-fstab-generator[831]: Ignoring "noauto" for root device
	[  +0.120595] systemd-fstab-generator[842]: Ignoring "noauto" for root device
	[  +0.151217] systemd-fstab-generator[855]: Ignoring "noauto" for root device
	[  +6.224630] systemd-fstab-generator[1075]: Ignoring "noauto" for root device
	[  +3.527060] kauditd_printk_skb: 67 callbacks suppressed
	[ +13.065437] systemd-fstab-generator[1487]: Ignoring "noauto" for root device
	[  +0.449635] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.172387] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 2 20:20] kauditd_printk_skb: 5 callbacks suppressed
	[Oct 2 20:25] systemd-fstab-generator[5569]: Ignoring "noauto" for root device
	[ +48.316001] kauditd_printk_skb: 6 callbacks suppressed
	
	* 
	* ==> etcd [d7e41a1bd713] <==
	* 2023-10-02 20:25:04.995112 I | etcdserver: starting member 3acad48fd10060b5 in cluster 44a3b1a5b956418a
	2023-10-02 20:25:04.995215 I | raft: 3acad48fd10060b5 became follower at term 0
	2023-10-02 20:25:04.995224 I | raft: newRaft 3acad48fd10060b5 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-10-02 20:25:04.995227 I | raft: 3acad48fd10060b5 became follower at term 1
	2023-10-02 20:25:05.025727 W | auth: simple token is not cryptographically signed
	2023-10-02 20:25:05.077462 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-10-02 20:25:05.153464 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-02 20:25:05.213068 I | etcdserver: 3acad48fd10060b5 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-10-02 20:25:05.350166 I | embed: listening for metrics on http://192.168.83.146:2381
	2023-10-02 20:25:05.411019 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-10-02 20:25:05.416039 I | etcdserver/membership: added member 3acad48fd10060b5 [https://192.168.83.146:2380] to cluster 44a3b1a5b956418a
	2023-10-02 20:25:05.795795 I | raft: 3acad48fd10060b5 is starting a new election at term 1
	2023-10-02 20:25:05.795915 I | raft: 3acad48fd10060b5 became candidate at term 2
	2023-10-02 20:25:05.796015 I | raft: 3acad48fd10060b5 received MsgVoteResp from 3acad48fd10060b5 at term 2
	2023-10-02 20:25:05.796050 I | raft: 3acad48fd10060b5 became leader at term 2
	2023-10-02 20:25:05.796068 I | raft: raft.node: 3acad48fd10060b5 elected leader 3acad48fd10060b5 at term 2
	2023-10-02 20:25:05.796352 I | etcdserver: setting up the initial cluster version to 3.3
	2023-10-02 20:25:05.797768 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-10-02 20:25:05.797921 I | etcdserver/api: enabled capabilities for version 3.3
	2023-10-02 20:25:05.798042 I | etcdserver: published {Name:old-k8s-version-864077 ClientURLs:[https://192.168.83.146:2379]} to cluster 44a3b1a5b956418a
	2023-10-02 20:25:05.798508 I | embed: ready to serve client requests
	2023-10-02 20:25:05.799086 I | embed: ready to serve client requests
	2023-10-02 20:25:05.801052 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-02 20:25:05.804940 I | embed: serving client requests on 192.168.83.146:2379
	2023-10-02 20:25:48.257356 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:752" took too long (122.687846ms) to execute
	
	* 
	* ==> kernel <==
	*  20:26:56 up 7 min,  0 users,  load average: 0.92, 0.59, 0.26
	Linux old-k8s-version-864077 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [587acca83733] <==
	* I1002 20:25:10.176890       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1002 20:25:10.183361       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I1002 20:25:10.194175       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I1002 20:25:10.194194       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1002 20:25:11.966535       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 20:25:12.246475       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1002 20:25:12.597942       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.83.146]
	I1002 20:25:12.598908       1 controller.go:606] quota admission added evaluator for: endpoints
	I1002 20:25:12.649587       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 20:25:13.494014       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1002 20:25:14.124590       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1002 20:25:14.430574       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1002 20:25:29.662113       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1002 20:25:29.718234       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1002 20:25:29.759865       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I1002 20:25:34.475179       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1002 20:25:34.475384       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 20:25:34.475627       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 20:25:34.475639       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 20:26:34.475939       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1002 20:26:34.476135       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 20:26:34.476186       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 20:26:34.476194       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [fdd5487c07ff] <==
	* I1002 20:25:33.530549       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"271bf0b0-a474-49a3-acc3-ff45243d631e", APIVersion:"apps/v1", ResourceVersion:"428", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1002 20:25:33.554467       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1002 20:25:33.558406       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 20:25:33.558733       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"9f0287a6-1082-4400-b993-d76d80d3098f", APIVersion:"apps/v1", ResourceVersion:"425", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1002 20:25:33.603187       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1002 20:25:33.604718       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 20:25:33.604735       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"271bf0b0-a474-49a3-acc3-ff45243d631e", APIVersion:"apps/v1", ResourceVersion:"434", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 20:25:33.607224       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"9f0287a6-1082-4400-b993-d76d80d3098f", APIVersion:"apps/v1", ResourceVersion:"425", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1002 20:25:33.641151       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 20:25:33.641236       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"9f0287a6-1082-4400-b993-d76d80d3098f", APIVersion:"apps/v1", ResourceVersion:"425", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1002 20:25:33.642509       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 20:25:33.642514       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"271bf0b0-a474-49a3-acc3-ff45243d631e", APIVersion:"apps/v1", ResourceVersion:"434", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1002 20:25:33.702795       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 20:25:33.703159       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"271bf0b0-a474-49a3-acc3-ff45243d631e", APIVersion:"apps/v1", ResourceVersion:"434", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1002 20:25:33.703351       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 20:25:33.703401       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"9f0287a6-1082-4400-b993-d76d80d3098f", APIVersion:"apps/v1", ResourceVersion:"425", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1002 20:25:33.733905       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 20:25:33.733906       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"271bf0b0-a474-49a3-acc3-ff45243d631e", APIVersion:"apps/v1", ResourceVersion:"434", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 20:25:33.880180       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"9f0287a6-1082-4400-b993-d76d80d3098f", APIVersion:"apps/v1", ResourceVersion:"425", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-d6b4b5544-vlrgl
	I1002 20:25:33.957400       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-74d5856cc6", UID:"203d7d94-6376-4334-9f08-9498ac891bfa", APIVersion:"apps/v1", ResourceVersion:"389", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-74d5856cc6-hhzk5
	I1002 20:25:34.775445       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"271bf0b0-a474-49a3-acc3-ff45243d631e", APIVersion:"apps/v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-84b68f675b-fg5j5
	E1002 20:26:00.380472       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1002 20:26:02.212727       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 20:26:30.632710       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1002 20:26:34.215346       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [5b1ee6f514c7] <==
	* W1002 20:25:32.214383       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1002 20:25:32.454354       1 node.go:135] Successfully retrieved node IP: 192.168.83.146
	I1002 20:25:32.454438       1 server_others.go:149] Using iptables Proxier.
	I1002 20:25:32.493840       1 server.go:529] Version: v1.16.0
	I1002 20:25:32.521685       1 config.go:313] Starting service config controller
	I1002 20:25:32.521754       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1002 20:25:32.521812       1 config.go:131] Starting endpoints config controller
	I1002 20:25:32.521825       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1002 20:25:32.627505       1 shared_informer.go:204] Caches are synced for service config 
	I1002 20:25:32.627750       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [8e524ee86fa4] <==
	* I1002 20:25:09.265644       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1002 20:25:09.266589       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1002 20:25:09.333143       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 20:25:09.341305       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 20:25:09.351604       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 20:25:09.352037       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 20:25:09.352081       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 20:25:09.352147       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1002 20:25:09.362366       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 20:25:09.362524       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1002 20:25:09.362860       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 20:25:09.363257       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 20:25:09.363618       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 20:25:10.335047       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 20:25:10.343690       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 20:25:10.354735       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 20:25:10.356602       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 20:25:10.356919       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 20:25:10.359104       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1002 20:25:10.363610       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 20:25:10.365615       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1002 20:25:10.367900       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 20:25:10.370708       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 20:25:10.370815       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 20:25:29.769824       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-10-02 20:19:21 UTC, ends at Mon 2023-10-02 20:26:56 UTC. --
	Oct 02 20:25:49 old-k8s-version-864077 kubelet[5575]: E1002 20:25:49.231495    5575 pod_workers.go:191] Error syncing pod 47535270-e92b-4a63-85f2-69f442965bf9 ("metrics-server-74d5856cc6-hhzk5_kube-system(47535270-e92b-4a63-85f2-69f442965bf9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 02 20:25:49 old-k8s-version-864077 kubelet[5575]: W1002 20:25:49.741181    5575 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-84b68f675b-fg5j5 through plugin: invalid network status for
	Oct 02 20:25:49 old-k8s-version-864077 kubelet[5575]: E1002 20:25:49.747360    5575 pod_workers.go:191] Error syncing pod 47535270-e92b-4a63-85f2-69f442965bf9 ("metrics-server-74d5856cc6-hhzk5_kube-system(47535270-e92b-4a63-85f2-69f442965bf9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 20:25:52 old-k8s-version-864077 kubelet[5575]: E1002 20:25:52.830048    5575 pod_workers.go:191] Error syncing pod 5f21c646-6748-4d5a-a5ae-bef5b3f54584 ("dashboard-metrics-scraper-d6b4b5544-vlrgl_kubernetes-dashboard(5f21c646-6748-4d5a-a5ae-bef5b3f54584)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-vlrgl_kubernetes-dashboard(5f21c646-6748-4d5a-a5ae-bef5b3f54584)"
	Oct 02 20:26:04 old-k8s-version-864077 kubelet[5575]: E1002 20:26:04.451508    5575 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 02 20:26:04 old-k8s-version-864077 kubelet[5575]: E1002 20:26:04.452171    5575 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 02 20:26:04 old-k8s-version-864077 kubelet[5575]: E1002 20:26:04.452520    5575 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 02 20:26:04 old-k8s-version-864077 kubelet[5575]: E1002 20:26:04.452738    5575 pod_workers.go:191] Error syncing pod 47535270-e92b-4a63-85f2-69f442965bf9 ("metrics-server-74d5856cc6-hhzk5_kube-system(47535270-e92b-4a63-85f2-69f442965bf9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 02 20:26:04 old-k8s-version-864077 kubelet[5575]: W1002 20:26:04.869583    5575 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-vlrgl through plugin: invalid network status for
	Oct 02 20:26:06 old-k8s-version-864077 kubelet[5575]: W1002 20:26:06.001339    5575 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-vlrgl through plugin: invalid network status for
	Oct 02 20:26:06 old-k8s-version-864077 kubelet[5575]: E1002 20:26:06.007771    5575 pod_workers.go:191] Error syncing pod 5f21c646-6748-4d5a-a5ae-bef5b3f54584 ("dashboard-metrics-scraper-d6b4b5544-vlrgl_kubernetes-dashboard(5f21c646-6748-4d5a-a5ae-bef5b3f54584)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-vlrgl_kubernetes-dashboard(5f21c646-6748-4d5a-a5ae-bef5b3f54584)"
	Oct 02 20:26:07 old-k8s-version-864077 kubelet[5575]: W1002 20:26:07.015877    5575 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-vlrgl through plugin: invalid network status for
	Oct 02 20:26:12 old-k8s-version-864077 kubelet[5575]: E1002 20:26:12.830203    5575 pod_workers.go:191] Error syncing pod 5f21c646-6748-4d5a-a5ae-bef5b3f54584 ("dashboard-metrics-scraper-d6b4b5544-vlrgl_kubernetes-dashboard(5f21c646-6748-4d5a-a5ae-bef5b3f54584)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-vlrgl_kubernetes-dashboard(5f21c646-6748-4d5a-a5ae-bef5b3f54584)"
	Oct 02 20:26:17 old-k8s-version-864077 kubelet[5575]: E1002 20:26:17.409014    5575 pod_workers.go:191] Error syncing pod 47535270-e92b-4a63-85f2-69f442965bf9 ("metrics-server-74d5856cc6-hhzk5_kube-system(47535270-e92b-4a63-85f2-69f442965bf9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 20:26:23 old-k8s-version-864077 kubelet[5575]: E1002 20:26:23.406707    5575 pod_workers.go:191] Error syncing pod 5f21c646-6748-4d5a-a5ae-bef5b3f54584 ("dashboard-metrics-scraper-d6b4b5544-vlrgl_kubernetes-dashboard(5f21c646-6748-4d5a-a5ae-bef5b3f54584)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-vlrgl_kubernetes-dashboard(5f21c646-6748-4d5a-a5ae-bef5b3f54584)"
	Oct 02 20:26:31 old-k8s-version-864077 kubelet[5575]: E1002 20:26:31.467205    5575 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 02 20:26:31 old-k8s-version-864077 kubelet[5575]: E1002 20:26:31.467244    5575 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 02 20:26:31 old-k8s-version-864077 kubelet[5575]: E1002 20:26:31.467283    5575 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 02 20:26:31 old-k8s-version-864077 kubelet[5575]: E1002 20:26:31.467311    5575 pod_workers.go:191] Error syncing pod 47535270-e92b-4a63-85f2-69f442965bf9 ("metrics-server-74d5856cc6-hhzk5_kube-system(47535270-e92b-4a63-85f2-69f442965bf9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 02 20:26:36 old-k8s-version-864077 kubelet[5575]: W1002 20:26:36.246798    5575 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-vlrgl through plugin: invalid network status for
	Oct 02 20:26:36 old-k8s-version-864077 kubelet[5575]: E1002 20:26:36.254675    5575 pod_workers.go:191] Error syncing pod 5f21c646-6748-4d5a-a5ae-bef5b3f54584 ("dashboard-metrics-scraper-d6b4b5544-vlrgl_kubernetes-dashboard(5f21c646-6748-4d5a-a5ae-bef5b3f54584)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-vlrgl_kubernetes-dashboard(5f21c646-6748-4d5a-a5ae-bef5b3f54584)"
	Oct 02 20:26:37 old-k8s-version-864077 kubelet[5575]: W1002 20:26:37.281557    5575 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-vlrgl through plugin: invalid network status for
	Oct 02 20:26:42 old-k8s-version-864077 kubelet[5575]: E1002 20:26:42.412594    5575 pod_workers.go:191] Error syncing pod 47535270-e92b-4a63-85f2-69f442965bf9 ("metrics-server-74d5856cc6-hhzk5_kube-system(47535270-e92b-4a63-85f2-69f442965bf9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 20:26:42 old-k8s-version-864077 kubelet[5575]: E1002 20:26:42.832064    5575 pod_workers.go:191] Error syncing pod 5f21c646-6748-4d5a-a5ae-bef5b3f54584 ("dashboard-metrics-scraper-d6b4b5544-vlrgl_kubernetes-dashboard(5f21c646-6748-4d5a-a5ae-bef5b3f54584)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-vlrgl_kubernetes-dashboard(5f21c646-6748-4d5a-a5ae-bef5b3f54584)"
	Oct 02 20:26:54 old-k8s-version-864077 kubelet[5575]: E1002 20:26:54.408365    5575 pod_workers.go:191] Error syncing pod 47535270-e92b-4a63-85f2-69f442965bf9 ("metrics-server-74d5856cc6-hhzk5_kube-system(47535270-e92b-4a63-85f2-69f442965bf9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> kubernetes-dashboard [47af95735396] <==
	* 2023/10/02 20:25:49 Using namespace: kubernetes-dashboard
	2023/10/02 20:25:49 Using in-cluster config to connect to apiserver
	2023/10/02 20:25:49 Using secret token for csrf signing
	2023/10/02 20:25:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2023/10/02 20:25:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2023/10/02 20:25:49 Successful initial request to the apiserver, version: v1.16.0
	2023/10/02 20:25:49 Generating JWE encryption key
	2023/10/02 20:25:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2023/10/02 20:25:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2023/10/02 20:25:49 Initializing JWE encryption key from synchronized object
	2023/10/02 20:25:49 Creating in-cluster Sidecar client
	2023/10/02 20:25:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/10/02 20:25:49 Serving insecurely on HTTP port: 9090
	2023/10/02 20:26:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/10/02 20:26:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/10/02 20:25:49 Starting overwatch
	
	* 
	* ==> storage-provisioner [f1926558563b] <==
	* I1002 20:25:33.831393       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 20:25:33.927134       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 20:25:33.933652       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 20:25:34.002264       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 20:25:34.004333       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-864077_545c9f07-69ef-4196-b92f-2722321c3f5f!
	I1002 20:25:34.006187       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"16dca8be-9d78-4092-9c13-e0857c174fc5", APIVersion:"v1", ResourceVersion:"468", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-864077_545c9f07-69ef-4196-b92f-2722321c3f5f became leader
	I1002 20:25:34.113314       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-864077_545c9f07-69ef-4196-b92f-2722321c3f5f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-864077 -n old-k8s-version-864077
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-864077 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-hhzk5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-864077 describe pod metrics-server-74d5856cc6-hhzk5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-864077 describe pod metrics-server-74d5856cc6-hhzk5: exit status 1 (97.093592ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-hhzk5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-864077 describe pod metrics-server-74d5856cc6-hhzk5: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (2.23s)

                                                
                                    

Test pass (285/318)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 6.9
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.28.2/json-events 4.31
11 TestDownloadOnly/v1.28.2/preload-exists 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.05
16 TestDownloadOnly/DeleteAll 0.12
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.12
19 TestBinaryMirror 0.54
20 TestOffline 107.72
22 TestAddons/Setup 147.7
24 TestAddons/parallel/Registry 16.24
25 TestAddons/parallel/Ingress 21.93
26 TestAddons/parallel/InspektorGadget 10.88
27 TestAddons/parallel/MetricsServer 5.96
28 TestAddons/parallel/HelmTiller 22.07
30 TestAddons/parallel/CSI 64.17
31 TestAddons/parallel/Headlamp 18.34
32 TestAddons/parallel/CloudSpanner 5.7
33 TestAddons/parallel/LocalPath 57.8
36 TestAddons/serial/GCPAuth/Namespaces 0.13
37 TestAddons/StoppedEnableDisable 13.35
38 TestCertOptions 105.12
39 TestCertExpiration 310.77
40 TestDockerFlags 70.35
41 TestForceSystemdFlag 59.42
42 TestForceSystemdEnv 110.88
44 TestKVMDriverInstallOrUpdate 3.69
48 TestErrorSpam/setup 53.03
49 TestErrorSpam/start 0.34
50 TestErrorSpam/status 0.74
51 TestErrorSpam/pause 1.17
52 TestErrorSpam/unpause 1.33
53 TestErrorSpam/stop 4.21
56 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/StartWithProxy 66.43
58 TestFunctional/serial/AuditLog 0
59 TestFunctional/serial/SoftStart 43.22
60 TestFunctional/serial/KubeContext 0.05
61 TestFunctional/serial/KubectlGetPods 0.08
64 TestFunctional/serial/CacheCmd/cache/add_remote 2.3
65 TestFunctional/serial/CacheCmd/cache/add_local 1.34
66 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
67 TestFunctional/serial/CacheCmd/cache/list 0.04
68 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
69 TestFunctional/serial/CacheCmd/cache/cache_reload 1.27
70 TestFunctional/serial/CacheCmd/cache/delete 0.11
71 TestFunctional/serial/MinikubeKubectlCmd 0.1
72 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
73 TestFunctional/serial/ExtraConfig 39.75
74 TestFunctional/serial/ComponentHealth 0.07
75 TestFunctional/serial/LogsCmd 1.14
76 TestFunctional/serial/LogsFileCmd 1.15
77 TestFunctional/serial/InvalidService 5.29
79 TestFunctional/parallel/ConfigCmd 0.28
80 TestFunctional/parallel/DashboardCmd 18.98
81 TestFunctional/parallel/DryRun 0.28
82 TestFunctional/parallel/InternationalLanguage 0.14
83 TestFunctional/parallel/StatusCmd 0.91
87 TestFunctional/parallel/ServiceCmdConnect 8.66
88 TestFunctional/parallel/AddonsCmd 0.13
89 TestFunctional/parallel/PersistentVolumeClaim 57.17
91 TestFunctional/parallel/SSHCmd 0.42
92 TestFunctional/parallel/CpCmd 0.99
93 TestFunctional/parallel/MySQL 38.19
94 TestFunctional/parallel/FileSync 0.21
95 TestFunctional/parallel/CertSync 1.32
99 TestFunctional/parallel/NodeLabels 0.06
101 TestFunctional/parallel/NonActiveRuntimeDisabled 0.21
103 TestFunctional/parallel/License 0.2
104 TestFunctional/parallel/ServiceCmd/DeployApp 13.24
105 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
106 TestFunctional/parallel/ProfileCmd/profile_list 0.3
107 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
108 TestFunctional/parallel/MountCmd/any-port 9.9
109 TestFunctional/parallel/MountCmd/specific-port 1.84
110 TestFunctional/parallel/MountCmd/VerifyCleanup 1.67
111 TestFunctional/parallel/ServiceCmd/List 0.45
112 TestFunctional/parallel/ServiceCmd/JSONOutput 0.48
113 TestFunctional/parallel/ServiceCmd/HTTPS 0.32
114 TestFunctional/parallel/ServiceCmd/Format 0.35
115 TestFunctional/parallel/ServiceCmd/URL 0.44
125 TestFunctional/parallel/DockerEnv/bash 1.11
126 TestFunctional/parallel/Version/short 0.05
127 TestFunctional/parallel/Version/components 0.77
128 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
132 TestFunctional/parallel/ImageCommands/ImageBuild 3.29
133 TestFunctional/parallel/ImageCommands/Setup 1.27
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.31
135 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
136 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
137 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
138 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.73
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.12
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.38
141 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.28
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.55
144 TestFunctional/delete_addon-resizer_images 0.07
145 TestFunctional/delete_my-image_image 0.02
146 TestFunctional/delete_minikube_cached_images 0.01
147 TestGvisorAddon 292.54
150 TestImageBuild/serial/Setup 52.19
151 TestImageBuild/serial/NormalBuild 1.52
152 TestImageBuild/serial/BuildWithBuildArg 1.28
153 TestImageBuild/serial/BuildWithDockerIgnore 0.43
154 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.29
157 TestIngressAddonLegacy/StartLegacyK8sCluster 107.28
159 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 14.44
160 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.6
161 TestIngressAddonLegacy/serial/ValidateIngressAddons 34.06
164 TestJSONOutput/start/Command 67.27
165 TestJSONOutput/start/Audit 0
167 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
168 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
170 TestJSONOutput/pause/Command 0.57
171 TestJSONOutput/pause/Audit 0
173 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/unpause/Command 0.52
177 TestJSONOutput/unpause/Audit 0
179 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/stop/Command 8.08
183 TestJSONOutput/stop/Audit 0
185 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
187 TestErrorJSONOutput 0.18
192 TestMainNoArgs 0.04
193 TestMinikubeProfile 105.71
196 TestMountStart/serial/StartWithMountFirst 31.11
197 TestMountStart/serial/VerifyMountFirst 0.36
198 TestMountStart/serial/StartWithMountSecond 29.77
199 TestMountStart/serial/VerifyMountSecond 0.38
200 TestMountStart/serial/DeleteFirst 0.9
201 TestMountStart/serial/VerifyMountPostDelete 0.38
202 TestMountStart/serial/Stop 2.09
203 TestMountStart/serial/RestartStopped 26.79
204 TestMountStart/serial/VerifyMountPostStop 0.38
207 TestMultiNode/serial/FreshStart2Nodes 126.11
208 TestMultiNode/serial/DeployApp2Nodes 4.11
209 TestMultiNode/serial/PingHostFrom2Pods 0.85
210 TestMultiNode/serial/AddNode 47.12
211 TestMultiNode/serial/ProfileList 0.21
212 TestMultiNode/serial/CopyFile 7.17
213 TestMultiNode/serial/StopNode 3.95
215 TestMultiNode/serial/RestartKeepsNodes 267.1
216 TestMultiNode/serial/DeleteNode 1.73
217 TestMultiNode/serial/StopMultiNode 25.6
218 TestMultiNode/serial/RestartMultiNode 116.66
219 TestMultiNode/serial/ValidateNameConflict 52.89
224 TestPreload 205.81
226 TestScheduledStopUnix 123.25
227 TestSkaffold 145.38
230 TestRunningBinaryUpgrade 203.36
232 TestKubernetesUpgrade 224.08
234 TestStoppedBinaryUpgrade/Setup 0.49
235 TestStoppedBinaryUpgrade/Upgrade 213.83
244 TestPause/serial/Start 79.05
245 TestPause/serial/SecondStartNoReconfiguration 47.91
246 TestStoppedBinaryUpgrade/MinikubeLogs 1.62
247 TestPause/serial/Pause 0.57
248 TestPause/serial/VerifyStatus 0.26
249 TestPause/serial/Unpause 0.57
250 TestPause/serial/PauseAgain 0.79
251 TestPause/serial/DeletePaused 1.88
252 TestPause/serial/VerifyDeletedResources 0.29
265 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
266 TestNoKubernetes/serial/StartWithK8s 102.2
267 TestNoKubernetes/serial/StartWithStopK8s 38.09
269 TestStartStop/group/old-k8s-version/serial/FirstStart 161.3
270 TestNoKubernetes/serial/Start 60.51
271 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
272 TestNoKubernetes/serial/ProfileList 1.32
273 TestNoKubernetes/serial/Stop 2.21
274 TestNoKubernetes/serial/StartNoArgs 26.88
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
277 TestStartStop/group/no-preload/serial/FirstStart 118.19
279 TestStartStop/group/embed-certs/serial/FirstStart 75.57
280 TestStartStop/group/old-k8s-version/serial/DeployApp 9.51
281 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.07
282 TestStartStop/group/old-k8s-version/serial/Stop 13.33
283 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
284 TestStartStop/group/old-k8s-version/serial/SecondStart 464.78
285 TestStartStop/group/embed-certs/serial/DeployApp 9.57
286 TestStartStop/group/no-preload/serial/DeployApp 11.46
287 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.36
288 TestStartStop/group/embed-certs/serial/Stop 13.11
289 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.39
290 TestStartStop/group/no-preload/serial/Stop 13.12
291 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
292 TestStartStop/group/embed-certs/serial/SecondStart 333.94
294 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 95.41
295 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
296 TestStartStop/group/no-preload/serial/SecondStart 381
297 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.54
298 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.2
299 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.11
300 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
301 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 330.52
302 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 21.03
303 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
304 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.37
305 TestStartStop/group/embed-certs/serial/Pause 2.76
307 TestStartStop/group/newest-cni/serial/FirstStart 74.68
308 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
309 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
310 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
311 TestStartStop/group/no-preload/serial/Pause 2.63
312 TestNetworkPlugins/group/auto/Start 110.56
313 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
314 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
316 TestStartStop/group/old-k8s-version/serial/Pause 2.85
317 TestNetworkPlugins/group/kindnet/Start 85.59
318 TestStartStop/group/newest-cni/serial/DeployApp 0
319 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.32
320 TestStartStop/group/newest-cni/serial/Stop 13.15
321 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 22.03
322 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
323 TestStartStop/group/newest-cni/serial/SecondStart 62.25
324 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
325 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
326 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.03
327 TestNetworkPlugins/group/calico/Start 133.7
328 TestNetworkPlugins/group/auto/KubeletFlags 0.22
329 TestNetworkPlugins/group/auto/NetCatPod 13.35
330 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
331 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
333 TestStartStop/group/newest-cni/serial/Pause 2.76
334 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
335 TestNetworkPlugins/group/custom-flannel/Start 85.26
336 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
337 TestNetworkPlugins/group/kindnet/NetCatPod 14.38
338 TestNetworkPlugins/group/auto/DNS 0.23
339 TestNetworkPlugins/group/auto/Localhost 0.19
340 TestNetworkPlugins/group/auto/HairPin 0.19
341 TestNetworkPlugins/group/kindnet/DNS 0.25
342 TestNetworkPlugins/group/kindnet/Localhost 0.19
343 TestNetworkPlugins/group/kindnet/HairPin 0.2
344 TestNetworkPlugins/group/false/Start 119.28
345 TestNetworkPlugins/group/enable-default-cni/Start 103.23
346 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
347 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.38
348 TestNetworkPlugins/group/custom-flannel/DNS 0.28
349 TestNetworkPlugins/group/custom-flannel/Localhost 0.24
350 TestNetworkPlugins/group/calico/ControllerPod 5.03
351 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
352 TestNetworkPlugins/group/calico/KubeletFlags 0.24
353 TestNetworkPlugins/group/calico/NetCatPod 14.65
354 TestNetworkPlugins/group/flannel/Start 87.53
355 TestNetworkPlugins/group/calico/DNS 0.31
356 TestNetworkPlugins/group/calico/Localhost 0.2
357 TestNetworkPlugins/group/calico/HairPin 0.17
358 TestNetworkPlugins/group/bridge/Start 90.27
359 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
360 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.3
361 TestNetworkPlugins/group/false/KubeletFlags 0.27
362 TestNetworkPlugins/group/false/NetCatPod 13.38
363 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
364 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
365 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
366 TestNetworkPlugins/group/false/DNS 0.25
367 TestNetworkPlugins/group/false/Localhost 0.17
368 TestNetworkPlugins/group/false/HairPin 0.16
369 TestNetworkPlugins/group/kubenet/Start 110.09
370 TestNetworkPlugins/group/flannel/ControllerPod 5.02
371 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
372 TestNetworkPlugins/group/flannel/NetCatPod 13.46
373 TestNetworkPlugins/group/flannel/DNS 0.23
374 TestNetworkPlugins/group/flannel/Localhost 0.19
375 TestNetworkPlugins/group/flannel/HairPin 0.18
376 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
377 TestNetworkPlugins/group/bridge/NetCatPod 12.3
378 TestNetworkPlugins/group/bridge/DNS 21.77
379 TestNetworkPlugins/group/bridge/Localhost 0.17
380 TestNetworkPlugins/group/bridge/HairPin 0.15
381 TestNetworkPlugins/group/kubenet/KubeletFlags 0.21
382 TestNetworkPlugins/group/kubenet/NetCatPod 10.3
383 TestNetworkPlugins/group/kubenet/DNS 0.18
384 TestNetworkPlugins/group/kubenet/Localhost 0.14
385 TestNetworkPlugins/group/kubenet/HairPin 0.14
TestDownloadOnly/v1.16.0/json-events (6.9s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-246838 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-246838 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 : (6.900028563s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (6.90s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-246838
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-246838: exit status 85 (57.904529ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-246838 | jenkins | v1.31.2 | 02 Oct 23 19:32 UTC |          |
	|         | -p download-only-246838        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 19:32:39
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 19:32:39.802520  398007 out.go:296] Setting OutFile to fd 1 ...
	I1002 19:32:39.802631  398007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 19:32:39.802640  398007 out.go:309] Setting ErrFile to fd 2...
	I1002 19:32:39.802644  398007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 19:32:39.802844  398007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-390762/.minikube/bin
	W1002 19:32:39.802983  398007 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17323-390762/.minikube/config/config.json: open /home/jenkins/minikube-integration/17323-390762/.minikube/config/config.json: no such file or directory
	I1002 19:32:39.803536  398007 out.go:303] Setting JSON to true
	I1002 19:32:39.804687  398007 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":8111,"bootTime":1696267049,"procs":447,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 19:32:39.804743  398007 start.go:138] virtualization: kvm guest
	I1002 19:32:39.807159  398007 out.go:97] [download-only-246838] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 19:32:39.808768  398007 out.go:169] MINIKUBE_LOCATION=17323
	W1002 19:32:39.807275  398007 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17323-390762/.minikube/cache/preloaded-tarball: no such file or directory
	I1002 19:32:39.807322  398007 notify.go:220] Checking for updates...
	I1002 19:32:39.811758  398007 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 19:32:39.813166  398007 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17323-390762/kubeconfig
	I1002 19:32:39.814611  398007 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-390762/.minikube
	I1002 19:32:39.815965  398007 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1002 19:32:39.818408  398007 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 19:32:39.818666  398007 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 19:32:39.850461  398007 out.go:97] Using the kvm2 driver based on user configuration
	I1002 19:32:39.850483  398007 start.go:298] selected driver: kvm2
	I1002 19:32:39.850488  398007 start.go:902] validating driver "kvm2" against <nil>
	I1002 19:32:39.850803  398007 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 19:32:39.850887  398007 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17323-390762/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 19:32:39.864810  398007 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1002 19:32:39.864862  398007 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 19:32:39.865278  398007 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1002 19:32:39.865414  398007 start_flags.go:905] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 19:32:39.865461  398007 cni.go:84] Creating CNI manager for ""
	I1002 19:32:39.865495  398007 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1002 19:32:39.865502  398007 start_flags.go:321] config:
	{Name:download-only-246838 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-246838 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 19:32:39.865673  398007 iso.go:125] acquiring lock: {Name:mkbfe48e1980de2c6c14998e378eaaa3f660e151 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 19:32:39.867511  398007 out.go:97] Downloading VM boot image ...
	I1002 19:32:39.867554  398007 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17323-390762/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1002 19:32:42.869473  398007 out.go:97] Starting control plane node download-only-246838 in cluster download-only-246838
	I1002 19:32:42.869496  398007 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1002 19:32:42.892875  398007 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1002 19:32:42.892917  398007 cache.go:57] Caching tarball of preloaded images
	I1002 19:32:42.893060  398007 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1002 19:32:42.894767  398007 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1002 19:32:42.894784  398007 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1002 19:32:42.927596  398007 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/17323-390762/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-246838"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

TestDownloadOnly/v1.28.2/json-events (4.31s)

=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-246838 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-246838 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=kvm2 : (4.311667202s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (4.31s)

TestDownloadOnly/v1.28.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

TestDownloadOnly/v1.28.2/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-246838
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-246838: exit status 85 (53.533729ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-246838 | jenkins | v1.31.2 | 02 Oct 23 19:32 UTC |          |
	|         | -p download-only-246838        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-246838 | jenkins | v1.31.2 | 02 Oct 23 19:32 UTC |          |
	|         | -p download-only-246838        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 19:32:46
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 19:32:46.760764  398072 out.go:296] Setting OutFile to fd 1 ...
	I1002 19:32:46.761046  398072 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 19:32:46.761058  398072 out.go:309] Setting ErrFile to fd 2...
	I1002 19:32:46.761064  398072 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 19:32:46.761247  398072 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-390762/.minikube/bin
	W1002 19:32:46.761375  398072 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17323-390762/.minikube/config/config.json: open /home/jenkins/minikube-integration/17323-390762/.minikube/config/config.json: no such file or directory
	I1002 19:32:46.761811  398072 out.go:303] Setting JSON to true
	I1002 19:32:46.762950  398072 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":8118,"bootTime":1696267049,"procs":443,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 19:32:46.763012  398072 start.go:138] virtualization: kvm guest
	I1002 19:32:46.765019  398072 out.go:97] [download-only-246838] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 19:32:46.766594  398072 out.go:169] MINIKUBE_LOCATION=17323
	I1002 19:32:46.765221  398072 notify.go:220] Checking for updates...
	I1002 19:32:46.769483  398072 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 19:32:46.770931  398072 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17323-390762/kubeconfig
	I1002 19:32:46.772328  398072 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-390762/.minikube
	I1002 19:32:46.773633  398072 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-246838"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.05s)

TestDownloadOnly/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.12s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-246838
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.54s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-467718 --alsologtostderr --binary-mirror http://127.0.0.1:40953 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-467718" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-467718
--- PASS: TestBinaryMirror (0.54s)

TestOffline (107.72s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-220277 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-220277 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m46.64256645s)
helpers_test.go:175: Cleaning up "offline-docker-220277" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-220277
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-220277: (1.072170956s)
--- PASS: TestOffline (107.72s)

TestAddons/Setup (147.7s)

=== RUN   TestAddons/Setup
addons_test.go:89: (dbg) Run:  out/minikube-linux-amd64 start -p addons-169812 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:89: (dbg) Done: out/minikube-linux-amd64 start -p addons-169812 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m27.697318832s)
--- PASS: TestAddons/Setup (147.70s)

TestAddons/parallel/Registry (16.24s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:308: registry stabilized in 16.227145ms
addons_test.go:310: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-v8mbp" [5630459f-eeee-4d30-9d57-aea33413f7c3] Running
addons_test.go:310: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.026303326s
addons_test.go:313: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-v7ld4" [da3cfe45-ab82-48f5-9d53-8d9b544f2afe] Running
addons_test.go:313: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.024862198s
addons_test.go:318: (dbg) Run:  kubectl --context addons-169812 delete po -l run=registry-test --now
addons_test.go:323: (dbg) Run:  kubectl --context addons-169812 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:323: (dbg) Done: kubectl --context addons-169812 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.417305009s)
addons_test.go:337: (dbg) Run:  out/minikube-linux-amd64 -p addons-169812 ip
2023/10/02 19:35:35 [DEBUG] GET http://192.168.39.144:5000
addons_test.go:366: (dbg) Run:  out/minikube-linux-amd64 -p addons-169812 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.24s)

TestAddons/parallel/Ingress (21.93s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) Run:  kubectl --context addons-169812 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:210: (dbg) Run:  kubectl --context addons-169812 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:223: (dbg) Run:  kubectl --context addons-169812 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:228: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [87d291a4-ec63-403d-bb3c-7e7c533664cc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [87d291a4-ec63-403d-bb3c-7e7c533664cc] Running
addons_test.go:228: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.022074803s
addons_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p addons-169812 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Run:  kubectl --context addons-169812 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:269: (dbg) Run:  out/minikube-linux-amd64 -p addons-169812 ip
addons_test.go:275: (dbg) Run:  nslookup hello-john.test 192.168.39.144
addons_test.go:284: (dbg) Run:  out/minikube-linux-amd64 -p addons-169812 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:284: (dbg) Done: out/minikube-linux-amd64 -p addons-169812 addons disable ingress-dns --alsologtostderr -v=1: (1.17778005s)
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-169812 addons disable ingress --alsologtostderr -v=1
addons_test.go:289: (dbg) Done: out/minikube-linux-amd64 -p addons-169812 addons disable ingress --alsologtostderr -v=1: (7.897702587s)
--- PASS: TestAddons/parallel/Ingress (21.93s)

TestAddons/parallel/InspektorGadget (10.88s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:816: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-nxfbg" [bb77f3d2-a595-455b-bc0f-95beddc29485] Running
addons_test.go:816: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.018237375s
addons_test.go:819: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-169812
addons_test.go:819: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-169812: (5.863130431s)
--- PASS: TestAddons/parallel/InspektorGadget (10.88s)

TestAddons/parallel/MetricsServer (5.96s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:385: metrics-server stabilized in 3.859852ms
addons_test.go:387: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-xsrhl" [78057db1-d2e5-4e73-84d0-a8b32bf58213] Running
addons_test.go:387: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.018216787s
addons_test.go:393: (dbg) Run:  kubectl --context addons-169812 top pods -n kube-system
addons_test.go:410: (dbg) Run:  out/minikube-linux-amd64 -p addons-169812 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.96s)

TestAddons/parallel/HelmTiller (22.07s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:434: tiller-deploy stabilized in 3.445826ms
addons_test.go:436: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-7mvjx" [de741796-818e-47e9-ac76-77a804a8d42b] Running
addons_test.go:436: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.016455232s
addons_test.go:451: (dbg) Run:  kubectl --context addons-169812 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:451: (dbg) Done: kubectl --context addons-169812 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (12.06094843s)
addons_test.go:456: kubectl --context addons-169812 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:451: (dbg) Run:  kubectl --context addons-169812 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:451: (dbg) Done: kubectl --context addons-169812 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.840641977s)
addons_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p addons-169812 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (22.07s)

TestAddons/parallel/CSI (64.17s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:539: csi-hostpath-driver pods stabilized in 6.608968ms
addons_test.go:542: (dbg) Run:  kubectl --context addons-169812 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:547: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:552: (dbg) Run:  kubectl --context addons-169812 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [166c8f76-2dbc-4d35-a9bb-39cb61d79d2a] Pending
helpers_test.go:344: "task-pv-pod" [166c8f76-2dbc-4d35-a9bb-39cb61d79d2a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [166c8f76-2dbc-4d35-a9bb-39cb61d79d2a] Running
addons_test.go:557: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.037076968s
addons_test.go:562: (dbg) Run:  kubectl --context addons-169812 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-169812 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-169812 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-169812 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:572: (dbg) Run:  kubectl --context addons-169812 delete pod task-pv-pod
addons_test.go:578: (dbg) Run:  kubectl --context addons-169812 delete pvc hpvc
addons_test.go:584: (dbg) Run:  kubectl --context addons-169812 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-169812 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d0fc40bf-9985-4c24-ba7a-e0f3e58e7c4e] Pending
helpers_test.go:344: "task-pv-pod-restore" [d0fc40bf-9985-4c24-ba7a-e0f3e58e7c4e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d0fc40bf-9985-4c24-ba7a-e0f3e58e7c4e] Running
addons_test.go:599: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.030294778s
addons_test.go:604: (dbg) Run:  kubectl --context addons-169812 delete pod task-pv-pod-restore
addons_test.go:608: (dbg) Run:  kubectl --context addons-169812 delete pvc hpvc-restore
addons_test.go:612: (dbg) Run:  kubectl --context addons-169812 delete volumesnapshot new-snapshot-demo
addons_test.go:616: (dbg) Run:  out/minikube-linux-amd64 -p addons-169812 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:616: (dbg) Done: out/minikube-linux-amd64 -p addons-169812 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.819444815s)
addons_test.go:620: (dbg) Run:  out/minikube-linux-amd64 -p addons-169812 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (64.17s)
TestAddons/parallel/Headlamp (18.34s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:802: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-169812 --alsologtostderr -v=1
addons_test.go:802: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-169812 --alsologtostderr -v=1: (1.284073883s)
addons_test.go:807: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-58b88cff49-75wcg" [63bb6059-9b3d-46f2-947c-564d99caa7ac] Pending
helpers_test.go:344: "headlamp-58b88cff49-75wcg" [63bb6059-9b3d-46f2-947c-564d99caa7ac] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58b88cff49-75wcg" [63bb6059-9b3d-46f2-947c-564d99caa7ac] Running
addons_test.go:807: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 17.059221872s
--- PASS: TestAddons/parallel/Headlamp (18.34s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.7s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:835: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-pgj2z" [f00358d9-98d1-4b3e-8b82-eb469eaebb55] Running
addons_test.go:835: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.014027531s
addons_test.go:838: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-169812
--- PASS: TestAddons/parallel/CloudSpanner (5.70s)

                                                
                                    
TestAddons/parallel/LocalPath (57.8s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:851: (dbg) Run:  kubectl --context addons-169812 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:857: (dbg) Run:  kubectl --context addons-169812 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:861: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-169812 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:864: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [dd34b7b3-8891-4845-b00f-2adad4b9078c] Pending
helpers_test.go:344: "test-local-path" [dd34b7b3-8891-4845-b00f-2adad4b9078c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [dd34b7b3-8891-4845-b00f-2adad4b9078c] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [dd34b7b3-8891-4845-b00f-2adad4b9078c] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:864: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.016571078s
addons_test.go:869: (dbg) Run:  kubectl --context addons-169812 get pvc test-pvc -o=json
addons_test.go:878: (dbg) Run:  out/minikube-linux-amd64 -p addons-169812 ssh "cat /opt/local-path-provisioner/pvc-59e56529-6f78-4efa-989b-9350e38a7470_default_test-pvc/file1"
addons_test.go:890: (dbg) Run:  kubectl --context addons-169812 delete pod test-local-path
addons_test.go:894: (dbg) Run:  kubectl --context addons-169812 delete pvc test-pvc
addons_test.go:898: (dbg) Run:  out/minikube-linux-amd64 -p addons-169812 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:898: (dbg) Done: out/minikube-linux-amd64 -p addons-169812 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.158408461s)
--- PASS: TestAddons/parallel/LocalPath (57.80s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:628: (dbg) Run:  kubectl --context addons-169812 create ns new-namespace
addons_test.go:642: (dbg) Run:  kubectl --context addons-169812 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestAddons/StoppedEnableDisable (13.35s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:150: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-169812
addons_test.go:150: (dbg) Done: out/minikube-linux-amd64 stop -p addons-169812: (13.096477669s)
addons_test.go:154: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-169812
addons_test.go:158: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-169812
addons_test.go:163: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-169812
--- PASS: TestAddons/StoppedEnableDisable (13.35s)

                                                
                                    
TestCertOptions (105.12s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-233648 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
E1002 20:13:22.743627  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-233648 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m42.786130687s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-233648 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-233648 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-233648 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-233648" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-233648
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-233648: (1.800280601s)
--- PASS: TestCertOptions (105.12s)

                                                
                                    
TestCertExpiration (310.77s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-859458 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-859458 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m32.1584693s)
E1002 20:14:41.280481  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/skaffold-492655/client.crt: no such file or directory
E1002 20:14:47.100728  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/ingress-addon-legacy-851692/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-859458 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-859458 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (37.389924092s)
helpers_test.go:175: Cleaning up "cert-expiration-859458" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-859458
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-859458: (1.223672651s)
--- PASS: TestCertExpiration (310.77s)

                                                
                                    
TestDockerFlags (70.35s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-031049 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-031049 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m8.945330148s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-031049 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-031049 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-031049" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-031049
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-031049: (1.003616804s)
--- PASS: TestDockerFlags (70.35s)

                                                
                                    
TestForceSystemdFlag (59.42s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-515164 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-515164 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (58.212708488s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-515164 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-515164" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-515164
--- PASS: TestForceSystemdFlag (59.42s)

                                                
                                    
TestForceSystemdEnv (110.88s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-732798 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
E1002 20:14:00.317712  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/skaffold-492655/client.crt: no such file or directory
E1002 20:14:00.322986  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/skaffold-492655/client.crt: no such file or directory
E1002 20:14:00.333245  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/skaffold-492655/client.crt: no such file or directory
E1002 20:14:00.353527  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/skaffold-492655/client.crt: no such file or directory
E1002 20:14:00.393902  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/skaffold-492655/client.crt: no such file or directory
E1002 20:14:00.474274  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/skaffold-492655/client.crt: no such file or directory
E1002 20:14:00.634715  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/skaffold-492655/client.crt: no such file or directory
E1002 20:14:00.955417  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/skaffold-492655/client.crt: no such file or directory
E1002 20:14:01.596370  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/skaffold-492655/client.crt: no such file or directory
E1002 20:14:02.877281  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/skaffold-492655/client.crt: no such file or directory
E1002 20:14:05.437842  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/skaffold-492655/client.crt: no such file or directory
E1002 20:14:10.559016  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/skaffold-492655/client.crt: no such file or directory
E1002 20:14:20.800165  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/skaffold-492655/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-732798 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m49.507794448s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-732798 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-732798" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-732798
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-732798: (1.072744076s)
--- PASS: TestForceSystemdEnv (110.88s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.69s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.69s)

                                                
                                    
TestErrorSpam/setup (53.03s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-077530 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-077530 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-077530 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-077530 --driver=kvm2 : (53.029808266s)
--- PASS: TestErrorSpam/setup (53.03s)

                                                
                                    
TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077530 --log_dir /tmp/nospam-077530 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077530 --log_dir /tmp/nospam-077530 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077530 --log_dir /tmp/nospam-077530 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.74s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077530 --log_dir /tmp/nospam-077530 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077530 --log_dir /tmp/nospam-077530 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077530 --log_dir /tmp/nospam-077530 status
--- PASS: TestErrorSpam/status (0.74s)

                                                
                                    
TestErrorSpam/pause (1.17s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077530 --log_dir /tmp/nospam-077530 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077530 --log_dir /tmp/nospam-077530 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077530 --log_dir /tmp/nospam-077530 pause
--- PASS: TestErrorSpam/pause (1.17s)

                                                
                                    
TestErrorSpam/unpause (1.33s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077530 --log_dir /tmp/nospam-077530 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077530 --log_dir /tmp/nospam-077530 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077530 --log_dir /tmp/nospam-077530 unpause
--- PASS: TestErrorSpam/unpause (1.33s)

                                                
                                    
TestErrorSpam/stop (4.21s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077530 --log_dir /tmp/nospam-077530 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-077530 --log_dir /tmp/nospam-077530 stop: (4.079124018s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077530 --log_dir /tmp/nospam-077530 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077530 --log_dir /tmp/nospam-077530 stop
--- PASS: TestErrorSpam/stop (4.21s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17323-390762/.minikube/files/etc/test/nested/copy/397995/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (66.43s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-000083 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-000083 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m6.42620762s)
--- PASS: TestFunctional/serial/StartWithProxy (66.43s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (43.22s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-000083 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-000083 --alsologtostderr -v=8: (43.219895271s)
functional_test.go:659: soft start took 43.220600885s for "functional-000083" cluster.
--- PASS: TestFunctional/serial/SoftStart (43.22s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-000083 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-000083 /tmp/TestFunctionalserialCacheCmdcacheadd_local2463826060/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 cache add minikube-local-cache-test:functional-000083
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-000083 cache add minikube-local-cache-test:functional-000083: (1.040094404s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 cache delete minikube-local-cache-test:functional-000083
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-000083
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.34s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-000083 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (224.573261ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.27s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 kubectl -- --context functional-000083 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-000083 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (39.75s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-000083 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1002 19:40:19.694190  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
E1002 19:40:19.700319  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
E1002 19:40:19.710602  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
E1002 19:40:19.730910  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
E1002 19:40:19.771360  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
E1002 19:40:19.851740  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
E1002 19:40:20.012257  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
E1002 19:40:20.332667  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
E1002 19:40:20.973699  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
E1002 19:40:22.254888  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
E1002 19:40:24.816700  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
E1002 19:40:29.936905  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-000083 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.746534486s)
functional_test.go:757: restart took 39.7466977s for "functional-000083" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.75s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-000083 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.14s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-000083 logs: (1.136829635s)
--- PASS: TestFunctional/serial/LogsCmd (1.14s)

TestFunctional/serial/LogsFileCmd (1.15s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 logs --file /tmp/TestFunctionalserialLogsFileCmd1296202883/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-000083 logs --file /tmp/TestFunctionalserialLogsFileCmd1296202883/001/logs.txt: (1.150662343s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.15s)

TestFunctional/serial/InvalidService (5.29s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-000083 apply -f testdata/invalidsvc.yaml
E1002 19:40:40.177992  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-000083
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-000083: exit status 115 (291.493008ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.171:32102 |
	|-----------|-------------|-------------|-----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-000083 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-000083 delete -f testdata/invalidsvc.yaml: (1.668980774s)
--- PASS: TestFunctional/serial/InvalidService (5.29s)

TestFunctional/parallel/ConfigCmd (0.28s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-000083 config get cpus: exit status 14 (49.512059ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-000083 config get cpus: exit status 14 (41.609131ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.28s)

TestFunctional/parallel/DashboardCmd (18.98s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-000083 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-000083 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 403552: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (18.98s)

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-000083 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-000083 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (131.500832ms)

-- stdout --
	* [functional-000083] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17323
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17323-390762/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-390762/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile

-- /stdout --
** stderr ** 
	I1002 19:40:47.734816  403406 out.go:296] Setting OutFile to fd 1 ...
	I1002 19:40:47.734936  403406 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 19:40:47.734945  403406 out.go:309] Setting ErrFile to fd 2...
	I1002 19:40:47.734949  403406 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 19:40:47.735104  403406 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-390762/.minikube/bin
	I1002 19:40:47.735633  403406 out.go:303] Setting JSON to false
	I1002 19:40:47.736578  403406 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":8599,"bootTime":1696267049,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 19:40:47.736639  403406 start.go:138] virtualization: kvm guest
	I1002 19:40:47.738572  403406 out.go:177] * [functional-000083] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 19:40:47.740330  403406 out.go:177]   - MINIKUBE_LOCATION=17323
	I1002 19:40:47.740296  403406 notify.go:220] Checking for updates...
	I1002 19:40:47.741953  403406 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 19:40:47.743489  403406 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17323-390762/kubeconfig
	I1002 19:40:47.745394  403406 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-390762/.minikube
	I1002 19:40:47.746916  403406 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 19:40:47.748468  403406 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 19:40:47.750263  403406 config.go:182] Loaded profile config "functional-000083": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 19:40:47.750688  403406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:40:47.750729  403406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:40:47.766074  403406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37987
	I1002 19:40:47.766457  403406 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:40:47.766961  403406 main.go:141] libmachine: Using API Version  1
	I1002 19:40:47.766984  403406 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:40:47.767347  403406 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:40:47.767547  403406 main.go:141] libmachine: (functional-000083) Calling .DriverName
	I1002 19:40:47.767793  403406 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 19:40:47.768077  403406 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:40:47.768120  403406 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:40:47.783108  403406 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42055
	I1002 19:40:47.783545  403406 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:40:47.783990  403406 main.go:141] libmachine: Using API Version  1
	I1002 19:40:47.784012  403406 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:40:47.784336  403406 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:40:47.784532  403406 main.go:141] libmachine: (functional-000083) Calling .DriverName
	I1002 19:40:47.817213  403406 out.go:177] * Using the kvm2 driver based on existing profile
	I1002 19:40:47.818831  403406 start.go:298] selected driver: kvm2
	I1002 19:40:47.818847  403406 start.go:902] validating driver "kvm2" against &{Name:functional-000083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-000083 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.171 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 19:40:47.818998  403406 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 19:40:47.821299  403406 out.go:177] 
	W1002 19:40:47.822857  403406 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 19:40:47.824241  403406 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-000083 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.28s)

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-000083 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-000083 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (143.03399ms)

-- stdout --
	* [functional-000083] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17323
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17323-390762/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-390762/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I1002 19:40:47.591494  403366 out.go:296] Setting OutFile to fd 1 ...
	I1002 19:40:47.591786  403366 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 19:40:47.591798  403366 out.go:309] Setting ErrFile to fd 2...
	I1002 19:40:47.591803  403366 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 19:40:47.592085  403366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-390762/.minikube/bin
	I1002 19:40:47.592665  403366 out.go:303] Setting JSON to false
	I1002 19:40:47.593702  403366 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":8599,"bootTime":1696267049,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 19:40:47.593768  403366 start.go:138] virtualization: kvm guest
	I1002 19:40:47.596116  403366 out.go:177] * [functional-000083] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I1002 19:40:47.597599  403366 out.go:177]   - MINIKUBE_LOCATION=17323
	I1002 19:40:47.599022  403366 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 19:40:47.597606  403366 notify.go:220] Checking for updates...
	I1002 19:40:47.600671  403366 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17323-390762/kubeconfig
	I1002 19:40:47.602213  403366 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-390762/.minikube
	I1002 19:40:47.603469  403366 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 19:40:47.604756  403366 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 19:40:47.606651  403366 config.go:182] Loaded profile config "functional-000083": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 19:40:47.607103  403366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:40:47.607161  403366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:40:47.622833  403366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33789
	I1002 19:40:47.623294  403366 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:40:47.623934  403366 main.go:141] libmachine: Using API Version  1
	I1002 19:40:47.623963  403366 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:40:47.624336  403366 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:40:47.624559  403366 main.go:141] libmachine: (functional-000083) Calling .DriverName
	I1002 19:40:47.624876  403366 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 19:40:47.625289  403366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:40:47.625341  403366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:40:47.640866  403366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34657
	I1002 19:40:47.641268  403366 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:40:47.641761  403366 main.go:141] libmachine: Using API Version  1
	I1002 19:40:47.641786  403366 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:40:47.642215  403366 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:40:47.642382  403366 main.go:141] libmachine: (functional-000083) Calling .DriverName
	I1002 19:40:47.684012  403366 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1002 19:40:47.685521  403366 start.go:298] selected driver: kvm2
	I1002 19:40:47.685541  403366 start.go:902] validating driver "kvm2" against &{Name:functional-000083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-000083 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.171 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 19:40:47.685666  403366 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 19:40:47.688384  403366 out.go:177] 
	W1002 19:40:47.690080  403366 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1002 19:40:47.691491  403366 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (0.91s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.91s)

TestFunctional/parallel/ServiceCmdConnect (8.66s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-000083 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-000083 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-9ccsk" [d285b5c1-1c29-4a17-bc75-89198e3759b4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E1002 19:41:00.658691  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
helpers_test.go:344: "hello-node-connect-55497b8b78-9ccsk" [d285b5c1-1c29-4a17-bc75-89198e3759b4] Running
2023/10/02 19:41:06 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.021024366s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.171:30672
functional_test.go:1674: http://192.168.39.171:30672: success! body:

Hostname: hello-node-connect-55497b8b78-9ccsk

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.171:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.171:30672
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.66s)

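The `kubectl create deployment` / `kubectl expose --type=NodePort` pair driven by the test above is equivalent to applying a manifest of roughly the following shape (an illustrative sketch, not part of the test suite; the NodePort 30672 seen in the log is allocated by the API server rather than requested here):

```yaml
# Declarative equivalent of the two imperative kubectl commands in the
# ServiceCmdConnect test (sketch only).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node-connect
spec:
  selector:
    matchLabels:
      app: hello-node-connect
  template:
    metadata:
      labels:
        app: hello-node-connect
    spec:
      containers:
      - name: echoserver
        image: registry.k8s.io/echoserver:1.8
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-node-connect
spec:
  type: NodePort
  selector:
    app: hello-node-connect
  ports:
  - port: 8080
    targetPort: 8080
```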
TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (57.17s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b794d5ff-8f88-479e-b939-ad6e1a507278] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.034194065s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-000083 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-000083 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-000083 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-000083 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-000083 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [acf3ab13-18bd-4920-9744-556ecfaff446] Pending
helpers_test.go:344: "sp-pod" [acf3ab13-18bd-4920-9744-556ecfaff446] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [acf3ab13-18bd-4920-9744-556ecfaff446] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.12764378s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-000083 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-000083 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-000083 delete -f testdata/storage-provisioner/pod.yaml: (1.683477733s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-000083 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [99697878-d061-49b2-aa75-b078f7250b0b] Pending
helpers_test.go:344: "sp-pod" [99697878-d061-49b2-aa75-b078f7250b0b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [99697878-d061-49b2-aa75-b078f7250b0b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.028615256s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-000083 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (57.17s)

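The PVC test above applies a claim, mounts it into a pod at `/tmp/mount`, touches a file, deletes and recreates the pod, and checks that the file survived. A minimal PVC + consumer pod of the same shape looks like this (a sketch; the real `testdata/storage-provisioner` manifests may differ in detail, and the `nginx` image is an assumption):

```yaml
# Minimal sketch of the PVC + consumer-pod shape exercised above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: /tmp/mount
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
```

Because the data lives in the claim, not the pod, deleting `sp-pod` and re-applying the pod manifest leaves `/tmp/mount/foo` intact, which is exactly what the `ls /tmp/mount` step verifies.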
TestFunctional/parallel/SSHCmd (0.42s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

TestFunctional/parallel/CpCmd (0.99s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh -n functional-000083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 cp functional-000083:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3043913122/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh -n functional-000083 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.99s)

TestFunctional/parallel/MySQL (38.19s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-000083 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-xpksk" [37a85175-ffb5-46d1-9664-4bac2b98a2d3] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-xpksk" [37a85175-ffb5-46d1-9664-4bac2b98a2d3] Running
E1002 19:41:41.619513  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 35.020293708s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-000083 exec mysql-859648c796-xpksk -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-000083 exec mysql-859648c796-xpksk -- mysql -ppassword -e "show databases;": exit status 1 (184.821293ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-000083 exec mysql-859648c796-xpksk -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-000083 exec mysql-859648c796-xpksk -- mysql -ppassword -e "show databases;": exit status 1 (144.814922ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-000083 exec mysql-859648c796-xpksk -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (38.19s)

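The two non-zero exits in the MySQL test above are expected: the harness keeps re-running `mysql` inside the pod until the server finishes initializing, tolerating a transient `Access denied` and a socket error along the way. The pattern is a plain retry-until-ready loop, sketched below with a stand-in probe that becomes ready on the third attempt (none of this is the minikube code itself):

```shell
# Generic retry-until-ready loop, as used conceptually by the MySQL test.
# retry MAX CMD...: run CMD until it succeeds or MAX attempts are used up.
retry() {
  max=$1; shift
  attempt=1
  while ! "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      echo "probe failed after $max attempts" >&2
      return 1
    fi
    attempt=$((attempt + 1))
    # The real harness sleeps between attempts; omitted here for brevity.
  done
  echo "ready after $attempt attempt(s)"
}

# Stand-in probe: fails twice (think 'Access denied', then a socket error
# while mysqld starts up), then succeeds.
PROBE_COUNT=0
probe() {
  PROBE_COUNT=$((PROBE_COUNT + 1))
  [ "$PROBE_COUNT" -ge 3 ]
}

retry 5 probe
```

Run as shown, the probe fails on attempts 1 and 2 and succeeds on attempt 3, mirroring the three `kubectl exec ... mysql` invocations in the log.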
TestFunctional/parallel/FileSync (0.21s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/397995/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh "sudo cat /etc/test/nested/copy/397995/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

TestFunctional/parallel/CertSync (1.32s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/397995.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh "sudo cat /etc/ssl/certs/397995.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/397995.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh "sudo cat /usr/share/ca-certificates/397995.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3979952.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh "sudo cat /etc/ssl/certs/3979952.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/3979952.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh "sudo cat /usr/share/ca-certificates/3979952.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.32s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-000083 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-000083 ssh "sudo systemctl is-active crio": exit status 1 (211.645691ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)

TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-000083 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-000083 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-6wkww" [bc7cd33f-656d-42b3-8811-6f2d77cf592c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-6wkww" [bc7cd33f-656d-42b3-8811-6f2d77cf592c] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.021829591s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.24s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

TestFunctional/parallel/ProfileCmd/profile_list (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "253.555547ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "49.134094ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.30s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "262.969468ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "43.394651ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

TestFunctional/parallel/MountCmd/any-port (9.9s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-000083 /tmp/TestFunctionalparallelMountCmdany-port2665725968/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1696275646417155246" to /tmp/TestFunctionalparallelMountCmdany-port2665725968/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1696275646417155246" to /tmp/TestFunctionalparallelMountCmdany-port2665725968/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1696275646417155246" to /tmp/TestFunctionalparallelMountCmdany-port2665725968/001/test-1696275646417155246
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-000083 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (295.776935ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  2 19:40 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  2 19:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  2 19:40 test-1696275646417155246
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh cat /mount-9p/test-1696275646417155246
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-000083 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [01121895-d00e-4c4c-af89-9bed6d184997] Pending
helpers_test.go:344: "busybox-mount" [01121895-d00e-4c4c-af89-9bed6d184997] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [01121895-d00e-4c4c-af89-9bed6d184997] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [01121895-d00e-4c4c-af89-9bed6d184997] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.018219284s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-000083 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-000083 /tmp/TestFunctionalparallelMountCmdany-port2665725968/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.90s)

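The any-port sequence above (write marker files on the host, list and read them through the guest, let the busybox pod delete one marker and leave its own, then stat both expected files) can be sketched with a plain temp directory standing in for the 9p mount. Everything below is illustrative, not the test's own code; the real test goes through `minikube mount`, `ssh findmnt`, and a pod:

```shell
# Stand-in for the any-port flow: a temp dir plays the role of /mount-9p.
host_dir=$(mktemp -d)
stamp="test-$(date +%s)"

# Step 1: the harness writes three marker files on the host side.
for f in created-by-test created-by-test-removed-by-pod "$stamp"; do
  printf '%s\n' "$stamp" > "$host_dir/$f"
done

# Step 2: verify they are visible on the "guest" side and readable.
ls -la "$host_dir"
cat "$host_dir/$stamp"

# Step 3: the busybox-mount pod removes one marker and leaves its own.
rm "$host_dir/created-by-test-removed-by-pod"
: > "$host_dir/created-by-pod"

# Step 4: the harness stats both files it expects to survive.
test -f "$host_dir/created-by-test" &&
  test -f "$host_dir/created-by-pod" &&
  echo "mount round-trip ok"
```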
TestFunctional/parallel/MountCmd/specific-port (1.84s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-000083 /tmp/TestFunctionalparallelMountCmdspecific-port3703106928/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-000083 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (199.757874ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-000083 /tmp/TestFunctionalparallelMountCmdspecific-port3703106928/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-000083 ssh "sudo umount -f /mount-9p": exit status 1 (212.171204ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-000083 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-000083 /tmp/TestFunctionalparallelMountCmdspecific-port3703106928/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.84s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.67s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-000083 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3846243532/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-000083 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3846243532/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-000083 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3846243532/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-000083 ssh "findmnt -T" /mount1: exit status 1 (266.201299ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-000083 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-000083 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3846243532/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-000083 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3846243532/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-000083 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3846243532/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.67s)

TestFunctional/parallel/ServiceCmd/List (0.45s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.45s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.48s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 service list -o json
functional_test.go:1493: Took "480.060916ms" to run "out/minikube-linux-amd64 -p functional-000083 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.48s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.171:31393
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

TestFunctional/parallel/ServiceCmd/Format (0.35s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

TestFunctional/parallel/ServiceCmd/URL (0.44s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.171:31393
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.44s)

TestFunctional/parallel/DockerEnv/bash (1.11s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-000083 docker-env) && out/minikube-linux-amd64 status -p functional-000083"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-000083 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.11s)

TestFunctional/parallel/Version/short (0.05s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.77s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.77s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-000083 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-000083
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-000083
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-000083 image ls --format short --alsologtostderr:
I1002 19:41:22.473975  405330 out.go:296] Setting OutFile to fd 1 ...
I1002 19:41:22.474117  405330 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 19:41:22.474128  405330 out.go:309] Setting ErrFile to fd 2...
I1002 19:41:22.474135  405330 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 19:41:22.474437  405330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-390762/.minikube/bin
I1002 19:41:22.475210  405330 config.go:182] Loaded profile config "functional-000083": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 19:41:22.475359  405330 config.go:182] Loaded profile config "functional-000083": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 19:41:22.475957  405330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1002 19:41:22.476038  405330 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 19:41:22.491234  405330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38931
I1002 19:41:22.491656  405330 main.go:141] libmachine: () Calling .GetVersion
I1002 19:41:22.492371  405330 main.go:141] libmachine: Using API Version  1
I1002 19:41:22.492404  405330 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 19:41:22.492796  405330 main.go:141] libmachine: () Calling .GetMachineName
I1002 19:41:22.493138  405330 main.go:141] libmachine: (functional-000083) Calling .GetState
I1002 19:41:22.495326  405330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1002 19:41:22.495375  405330 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 19:41:22.510780  405330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38119
I1002 19:41:22.511212  405330 main.go:141] libmachine: () Calling .GetVersion
I1002 19:41:22.511765  405330 main.go:141] libmachine: Using API Version  1
I1002 19:41:22.511795  405330 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 19:41:22.512133  405330 main.go:141] libmachine: () Calling .GetMachineName
I1002 19:41:22.512420  405330 main.go:141] libmachine: (functional-000083) Calling .DriverName
I1002 19:41:22.512717  405330 ssh_runner.go:195] Run: systemctl --version
I1002 19:41:22.512747  405330 main.go:141] libmachine: (functional-000083) Calling .GetSSHHostname
I1002 19:41:22.515639  405330 main.go:141] libmachine: (functional-000083) DBG | domain functional-000083 has defined MAC address 52:54:00:d3:e3:1c in network mk-functional-000083
I1002 19:41:22.516105  405330 main.go:141] libmachine: (functional-000083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:e3:1c", ip: ""} in network mk-functional-000083: {Iface:virbr1 ExpiryTime:2023-10-02 20:38:18 +0000 UTC Type:0 Mac:52:54:00:d3:e3:1c Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:functional-000083 Clientid:01:52:54:00:d3:e3:1c}
I1002 19:41:22.516179  405330 main.go:141] libmachine: (functional-000083) DBG | domain functional-000083 has defined IP address 192.168.39.171 and MAC address 52:54:00:d3:e3:1c in network mk-functional-000083
I1002 19:41:22.516263  405330 main.go:141] libmachine: (functional-000083) Calling .GetSSHPort
I1002 19:41:22.516452  405330 main.go:141] libmachine: (functional-000083) Calling .GetSSHKeyPath
I1002 19:41:22.516615  405330 main.go:141] libmachine: (functional-000083) Calling .GetSSHUsername
I1002 19:41:22.516741  405330 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/functional-000083/id_rsa Username:docker}
I1002 19:41:22.634956  405330 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1002 19:41:22.698236  405330 main.go:141] libmachine: Making call to close driver server
I1002 19:41:22.698253  405330 main.go:141] libmachine: (functional-000083) Calling .Close
I1002 19:41:22.698552  405330 main.go:141] libmachine: Successfully made call to close driver server
I1002 19:41:22.698582  405330 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 19:41:22.698593  405330 main.go:141] libmachine: (functional-000083) DBG | Closing plugin on server side
I1002 19:41:22.698604  405330 main.go:141] libmachine: Making call to close driver server
I1002 19:41:22.698619  405330 main.go:141] libmachine: (functional-000083) Calling .Close
I1002 19:41:22.698888  405330 main.go:141] libmachine: Successfully made call to close driver server
I1002 19:41:22.698905  405330 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-000083 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-000083 | af2f11568d983 | 30B    |
| docker.io/library/nginx                     | latest            | 61395b4c586da | 187MB  |
| registry.k8s.io/kube-scheduler              | v1.28.2           | 7a5d9d67a13f6 | 60.1MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-proxy                  | v1.28.2           | c120fed2beb84 | 73.1MB |
| registry.k8s.io/kube-controller-manager     | v1.28.2           | 55f13c92defb1 | 122MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-apiserver              | v1.28.2           | cdcab12b2dd16 | 126MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/google-containers/addon-resizer      | functional-000083 | ffd4cfbbe753e | 32.9MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-000083 image ls --format table --alsologtostderr:
I1002 19:41:23.038855  405439 out.go:296] Setting OutFile to fd 1 ...
I1002 19:41:23.039048  405439 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 19:41:23.039061  405439 out.go:309] Setting ErrFile to fd 2...
I1002 19:41:23.039068  405439 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 19:41:23.039403  405439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-390762/.minikube/bin
I1002 19:41:23.040363  405439 config.go:182] Loaded profile config "functional-000083": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 19:41:23.040552  405439 config.go:182] Loaded profile config "functional-000083": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 19:41:23.041172  405439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1002 19:41:23.041255  405439 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 19:41:23.056331  405439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36443
I1002 19:41:23.056891  405439 main.go:141] libmachine: () Calling .GetVersion
I1002 19:41:23.057612  405439 main.go:141] libmachine: Using API Version  1
I1002 19:41:23.057640  405439 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 19:41:23.058014  405439 main.go:141] libmachine: () Calling .GetMachineName
I1002 19:41:23.058367  405439 main.go:141] libmachine: (functional-000083) Calling .GetState
I1002 19:41:23.060489  405439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1002 19:41:23.060551  405439 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 19:41:23.075368  405439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38633
I1002 19:41:23.075888  405439 main.go:141] libmachine: () Calling .GetVersion
I1002 19:41:23.076429  405439 main.go:141] libmachine: Using API Version  1
I1002 19:41:23.076460  405439 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 19:41:23.076794  405439 main.go:141] libmachine: () Calling .GetMachineName
I1002 19:41:23.077023  405439 main.go:141] libmachine: (functional-000083) Calling .DriverName
I1002 19:41:23.077250  405439 ssh_runner.go:195] Run: systemctl --version
I1002 19:41:23.077300  405439 main.go:141] libmachine: (functional-000083) Calling .GetSSHHostname
I1002 19:41:23.080031  405439 main.go:141] libmachine: (functional-000083) DBG | domain functional-000083 has defined MAC address 52:54:00:d3:e3:1c in network mk-functional-000083
I1002 19:41:23.080488  405439 main.go:141] libmachine: (functional-000083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:e3:1c", ip: ""} in network mk-functional-000083: {Iface:virbr1 ExpiryTime:2023-10-02 20:38:18 +0000 UTC Type:0 Mac:52:54:00:d3:e3:1c Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:functional-000083 Clientid:01:52:54:00:d3:e3:1c}
I1002 19:41:23.080532  405439 main.go:141] libmachine: (functional-000083) DBG | domain functional-000083 has defined IP address 192.168.39.171 and MAC address 52:54:00:d3:e3:1c in network mk-functional-000083
I1002 19:41:23.080693  405439 main.go:141] libmachine: (functional-000083) Calling .GetSSHPort
I1002 19:41:23.080890  405439 main.go:141] libmachine: (functional-000083) Calling .GetSSHKeyPath
I1002 19:41:23.081054  405439 main.go:141] libmachine: (functional-000083) Calling .GetSSHUsername
I1002 19:41:23.081204  405439 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/functional-000083/id_rsa Username:docker}
I1002 19:41:23.185822  405439 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1002 19:41:23.262136  405439 main.go:141] libmachine: Making call to close driver server
I1002 19:41:23.262158  405439 main.go:141] libmachine: (functional-000083) Calling .Close
I1002 19:41:23.262548  405439 main.go:141] libmachine: (functional-000083) DBG | Closing plugin on server side
I1002 19:41:23.262577  405439 main.go:141] libmachine: Successfully made call to close driver server
I1002 19:41:23.262594  405439 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 19:41:23.262612  405439 main.go:141] libmachine: Making call to close driver server
I1002 19:41:23.262626  405439 main.go:141] libmachine: (functional-000083) Calling .Close
I1002 19:41:23.262850  405439 main.go:141] libmachine: Successfully made call to close driver server
I1002 19:41:23.262889  405439 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 19:41:23.262929  405439 main.go:141] libmachine: (functional-000083) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-000083 image ls --format json --alsologtostderr:
[{"id":"55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"122000000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"af2f11568d983d673f47d4d9ccc82b16eb35311bce235ccf04d7c2a9e9471fc3","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-000083"],"size":"30"},{"id":"cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"126000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-000083"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"73100000"},{"id":"7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"60100000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-000083 image ls --format json --alsologtostderr:
I1002 19:41:22.760724  405384 out.go:296] Setting OutFile to fd 1 ...
I1002 19:41:22.760878  405384 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 19:41:22.760890  405384 out.go:309] Setting ErrFile to fd 2...
I1002 19:41:22.760895  405384 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 19:41:22.761186  405384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-390762/.minikube/bin
I1002 19:41:22.762069  405384 config.go:182] Loaded profile config "functional-000083": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 19:41:22.762208  405384 config.go:182] Loaded profile config "functional-000083": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 19:41:22.762768  405384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1002 19:41:22.763062  405384 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 19:41:22.781306  405384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35147
I1002 19:41:22.782062  405384 main.go:141] libmachine: () Calling .GetVersion
I1002 19:41:22.782781  405384 main.go:141] libmachine: Using API Version  1
I1002 19:41:22.782819  405384 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 19:41:22.783194  405384 main.go:141] libmachine: () Calling .GetMachineName
I1002 19:41:22.783434  405384 main.go:141] libmachine: (functional-000083) Calling .GetState
I1002 19:41:22.785669  405384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1002 19:41:22.785753  405384 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 19:41:22.801618  405384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42943
I1002 19:41:22.802455  405384 main.go:141] libmachine: () Calling .GetVersion
I1002 19:41:22.803155  405384 main.go:141] libmachine: Using API Version  1
I1002 19:41:22.803181  405384 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 19:41:22.803720  405384 main.go:141] libmachine: () Calling .GetMachineName
I1002 19:41:22.803935  405384 main.go:141] libmachine: (functional-000083) Calling .DriverName
I1002 19:41:22.804190  405384 ssh_runner.go:195] Run: systemctl --version
I1002 19:41:22.804225  405384 main.go:141] libmachine: (functional-000083) Calling .GetSSHHostname
I1002 19:41:22.807292  405384 main.go:141] libmachine: (functional-000083) DBG | domain functional-000083 has defined MAC address 52:54:00:d3:e3:1c in network mk-functional-000083
I1002 19:41:22.807769  405384 main.go:141] libmachine: (functional-000083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:e3:1c", ip: ""} in network mk-functional-000083: {Iface:virbr1 ExpiryTime:2023-10-02 20:38:18 +0000 UTC Type:0 Mac:52:54:00:d3:e3:1c Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:functional-000083 Clientid:01:52:54:00:d3:e3:1c}
I1002 19:41:22.807801  405384 main.go:141] libmachine: (functional-000083) DBG | domain functional-000083 has defined IP address 192.168.39.171 and MAC address 52:54:00:d3:e3:1c in network mk-functional-000083
I1002 19:41:22.808384  405384 main.go:141] libmachine: (functional-000083) Calling .GetSSHPort
I1002 19:41:22.808581  405384 main.go:141] libmachine: (functional-000083) Calling .GetSSHKeyPath
I1002 19:41:22.808770  405384 main.go:141] libmachine: (functional-000083) Calling .GetSSHUsername
I1002 19:41:22.808977  405384 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/functional-000083/id_rsa Username:docker}
I1002 19:41:22.919507  405384 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1002 19:41:22.977746  405384 main.go:141] libmachine: Making call to close driver server
I1002 19:41:22.977764  405384 main.go:141] libmachine: (functional-000083) Calling .Close
I1002 19:41:22.978112  405384 main.go:141] libmachine: Successfully made call to close driver server
I1002 19:41:22.978140  405384 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 19:41:22.978151  405384 main.go:141] libmachine: Making call to close driver server
I1002 19:41:22.978161  405384 main.go:141] libmachine: (functional-000083) Calling .Close
I1002 19:41:22.978414  405384 main.go:141] libmachine: (functional-000083) DBG | Closing plugin on server side
I1002 19:41:22.978451  405384 main.go:141] libmachine: Successfully made call to close driver server
I1002 19:41:22.978460  405384 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-000083 image ls --format yaml --alsologtostderr:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "126000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-000083
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "60100000"
- id: 55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "122000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: af2f11568d983d673f47d4d9ccc82b16eb35311bce235ccf04d7c2a9e9471fc3
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-000083
size: "30"
- id: c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "73100000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-000083 image ls --format yaml --alsologtostderr:
I1002 19:41:22.468311  405331 out.go:296] Setting OutFile to fd 1 ...
I1002 19:41:22.468597  405331 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 19:41:22.468608  405331 out.go:309] Setting ErrFile to fd 2...
I1002 19:41:22.468616  405331 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 19:41:22.468800  405331 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-390762/.minikube/bin
I1002 19:41:22.469403  405331 config.go:182] Loaded profile config "functional-000083": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 19:41:22.469529  405331 config.go:182] Loaded profile config "functional-000083": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 19:41:22.469949  405331 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1002 19:41:22.470016  405331 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 19:41:22.486739  405331 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44893
I1002 19:41:22.487325  405331 main.go:141] libmachine: () Calling .GetVersion
I1002 19:41:22.488018  405331 main.go:141] libmachine: Using API Version  1
I1002 19:41:22.488043  405331 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 19:41:22.488581  405331 main.go:141] libmachine: () Calling .GetMachineName
I1002 19:41:22.488788  405331 main.go:141] libmachine: (functional-000083) Calling .GetState
I1002 19:41:22.491194  405331 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1002 19:41:22.491259  405331 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 19:41:22.506705  405331 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33959
I1002 19:41:22.507267  405331 main.go:141] libmachine: () Calling .GetVersion
I1002 19:41:22.507797  405331 main.go:141] libmachine: Using API Version  1
I1002 19:41:22.507821  405331 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 19:41:22.508219  405331 main.go:141] libmachine: () Calling .GetMachineName
I1002 19:41:22.508455  405331 main.go:141] libmachine: (functional-000083) Calling .DriverName
I1002 19:41:22.508736  405331 ssh_runner.go:195] Run: systemctl --version
I1002 19:41:22.508774  405331 main.go:141] libmachine: (functional-000083) Calling .GetSSHHostname
I1002 19:41:22.512097  405331 main.go:141] libmachine: (functional-000083) DBG | domain functional-000083 has defined MAC address 52:54:00:d3:e3:1c in network mk-functional-000083
I1002 19:41:22.512607  405331 main.go:141] libmachine: (functional-000083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:e3:1c", ip: ""} in network mk-functional-000083: {Iface:virbr1 ExpiryTime:2023-10-02 20:38:18 +0000 UTC Type:0 Mac:52:54:00:d3:e3:1c Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:functional-000083 Clientid:01:52:54:00:d3:e3:1c}
I1002 19:41:22.512650  405331 main.go:141] libmachine: (functional-000083) DBG | domain functional-000083 has defined IP address 192.168.39.171 and MAC address 52:54:00:d3:e3:1c in network mk-functional-000083
I1002 19:41:22.512765  405331 main.go:141] libmachine: (functional-000083) Calling .GetSSHPort
I1002 19:41:22.513014  405331 main.go:141] libmachine: (functional-000083) Calling .GetSSHKeyPath
I1002 19:41:22.513195  405331 main.go:141] libmachine: (functional-000083) Calling .GetSSHUsername
I1002 19:41:22.513400  405331 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/functional-000083/id_rsa Username:docker}
I1002 19:41:22.612537  405331 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1002 19:41:22.681933  405331 main.go:141] libmachine: Making call to close driver server
I1002 19:41:22.681948  405331 main.go:141] libmachine: (functional-000083) Calling .Close
I1002 19:41:22.682308  405331 main.go:141] libmachine: Successfully made call to close driver server
I1002 19:41:22.682332  405331 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 19:41:22.682354  405331 main.go:141] libmachine: Making call to close driver server
I1002 19:41:22.682374  405331 main.go:141] libmachine: (functional-000083) Calling .Close
I1002 19:41:22.682711  405331 main.go:141] libmachine: Successfully made call to close driver server
I1002 19:41:22.682729  405331 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-000083 ssh pgrep buildkitd: exit status 1 (252.759109ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 image build -t localhost/my-image:functional-000083 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-000083 image build -t localhost/my-image:functional-000083 testdata/build --alsologtostderr: (2.835647869s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-000083 image build -t localhost/my-image:functional-000083 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in d5d04c2bb499
Removing intermediate container d5d04c2bb499
---> 973501e34f8c
Step 3/3 : ADD content.txt /
---> 9a34c901adb4
Successfully built 9a34c901adb4
Successfully tagged localhost/my-image:functional-000083
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-000083 image build -t localhost/my-image:functional-000083 testdata/build --alsologtostderr:
I1002 19:41:22.981453  405426 out.go:296] Setting OutFile to fd 1 ...
I1002 19:41:22.981630  405426 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 19:41:22.981646  405426 out.go:309] Setting ErrFile to fd 2...
I1002 19:41:22.981656  405426 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 19:41:22.982070  405426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-390762/.minikube/bin
I1002 19:41:22.983051  405426 config.go:182] Loaded profile config "functional-000083": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 19:41:22.983601  405426 config.go:182] Loaded profile config "functional-000083": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 19:41:22.983964  405426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1002 19:41:22.984016  405426 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 19:41:22.999117  405426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39959
I1002 19:41:22.999662  405426 main.go:141] libmachine: () Calling .GetVersion
I1002 19:41:23.000264  405426 main.go:141] libmachine: Using API Version  1
I1002 19:41:23.000288  405426 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 19:41:23.000724  405426 main.go:141] libmachine: () Calling .GetMachineName
I1002 19:41:23.000893  405426 main.go:141] libmachine: (functional-000083) Calling .GetState
I1002 19:41:23.002782  405426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1002 19:41:23.002836  405426 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 19:41:23.019348  405426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36177
I1002 19:41:23.020007  405426 main.go:141] libmachine: () Calling .GetVersion
I1002 19:41:23.020474  405426 main.go:141] libmachine: Using API Version  1
I1002 19:41:23.020501  405426 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 19:41:23.021058  405426 main.go:141] libmachine: () Calling .GetMachineName
I1002 19:41:23.021252  405426 main.go:141] libmachine: (functional-000083) Calling .DriverName
I1002 19:41:23.021521  405426 ssh_runner.go:195] Run: systemctl --version
I1002 19:41:23.021549  405426 main.go:141] libmachine: (functional-000083) Calling .GetSSHHostname
I1002 19:41:23.024292  405426 main.go:141] libmachine: (functional-000083) DBG | domain functional-000083 has defined MAC address 52:54:00:d3:e3:1c in network mk-functional-000083
I1002 19:41:23.024718  405426 main.go:141] libmachine: (functional-000083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:e3:1c", ip: ""} in network mk-functional-000083: {Iface:virbr1 ExpiryTime:2023-10-02 20:38:18 +0000 UTC Type:0 Mac:52:54:00:d3:e3:1c Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:functional-000083 Clientid:01:52:54:00:d3:e3:1c}
I1002 19:41:23.024745  405426 main.go:141] libmachine: (functional-000083) DBG | domain functional-000083 has defined IP address 192.168.39.171 and MAC address 52:54:00:d3:e3:1c in network mk-functional-000083
I1002 19:41:23.024895  405426 main.go:141] libmachine: (functional-000083) Calling .GetSSHPort
I1002 19:41:23.025081  405426 main.go:141] libmachine: (functional-000083) Calling .GetSSHKeyPath
I1002 19:41:23.025234  405426 main.go:141] libmachine: (functional-000083) Calling .GetSSHUsername
I1002 19:41:23.025374  405426 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/functional-000083/id_rsa Username:docker}
I1002 19:41:23.163380  405426 build_images.go:151] Building image from path: /tmp/build.2096633371.tar
I1002 19:41:23.163466  405426 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1002 19:41:23.174893  405426 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2096633371.tar
I1002 19:41:23.182415  405426 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2096633371.tar: stat -c "%s %y" /var/lib/minikube/build/build.2096633371.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2096633371.tar': No such file or directory
I1002 19:41:23.182470  405426 ssh_runner.go:362] scp /tmp/build.2096633371.tar --> /var/lib/minikube/build/build.2096633371.tar (3072 bytes)
I1002 19:41:23.259969  405426 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2096633371
I1002 19:41:23.276570  405426 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2096633371 -xf /var/lib/minikube/build/build.2096633371.tar
I1002 19:41:23.290513  405426 docker.go:340] Building image: /var/lib/minikube/build/build.2096633371
I1002 19:41:23.290585  405426 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-000083 /var/lib/minikube/build/build.2096633371
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I1002 19:41:25.749019  405426 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-000083 /var/lib/minikube/build/build.2096633371: (2.458400938s)
I1002 19:41:25.749098  405426 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2096633371
I1002 19:41:25.758439  405426 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2096633371.tar
I1002 19:41:25.768888  405426 build_images.go:207] Built localhost/my-image:functional-000083 from /tmp/build.2096633371.tar
I1002 19:41:25.768930  405426 build_images.go:123] succeeded building to: functional-000083
I1002 19:41:25.768936  405426 build_images.go:124] failed building to: 
I1002 19:41:25.768968  405426 main.go:141] libmachine: Making call to close driver server
I1002 19:41:25.768986  405426 main.go:141] libmachine: (functional-000083) Calling .Close
I1002 19:41:25.769325  405426 main.go:141] libmachine: Successfully made call to close driver server
I1002 19:41:25.769351  405426 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 19:41:25.769364  405426 main.go:141] libmachine: Making call to close driver server
I1002 19:41:25.769375  405426 main.go:141] libmachine: (functional-000083) Calling .Close
I1002 19:41:25.769631  405426 main.go:141] libmachine: Successfully made call to close driver server
I1002 19:41:25.769647  405426 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.29s)
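For reference, the three build steps in the stdout above correspond to a Dockerfile equivalent to this sketch (reconstructed from the Step 1/3–3/3 log lines, not copied from `testdata/build`):

```dockerfile
# Step 1/3: base image pulled by the legacy builder
FROM gcr.io/k8s-minikube/busybox
# Step 2/3: a no-op layer
RUN true
# Step 3/3: adds content.txt from the 3 kB build context
ADD content.txt /
```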

TestFunctional/parallel/ImageCommands/Setup (1.27s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.24978478s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-000083
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.27s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 image load --daemon gcr.io/google-containers/addon-resizer:functional-000083 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-000083 image load --daemon gcr.io/google-containers/addon-resizer:functional-000083 --alsologtostderr: (4.066644421s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.31s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 image load --daemon gcr.io/google-containers/addon-resizer:functional-000083 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-000083 image load --daemon gcr.io/google-containers/addon-resizer:functional-000083 --alsologtostderr: (2.487974845s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.73s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.179924933s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-000083
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 image load --daemon gcr.io/google-containers/addon-resizer:functional-000083 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-000083 image load --daemon gcr.io/google-containers/addon-resizer:functional-000083 --alsologtostderr: (3.665839224s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.12s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 image save gcr.io/google-containers/addon-resizer:functional-000083 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-000083 image save gcr.io/google-containers/addon-resizer:functional-000083 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (2.382398659s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.38s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 image rm gcr.io/google-containers/addon-resizer:functional-000083 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-000083 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (2.009778322s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.28s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-000083
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-000083 image save --daemon gcr.io/google-containers/addon-resizer:functional-000083 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-000083 image save --daemon gcr.io/google-containers/addon-resizer:functional-000083 --alsologtostderr: (1.368445745s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-000083
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.55s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-000083
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-000083
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-000083
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestGvisorAddon (292.54s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-297880 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-297880 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m30.986939624s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-297880 cache add gcr.io/k8s-minikube/gvisor-addon:2
E1002 20:16:44.161086  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/skaffold-492655/client.crt: no such file or directory
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-297880 cache add gcr.io/k8s-minikube/gvisor-addon:2: (23.138226214s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-297880 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-297880 addons enable gvisor: (3.27524375s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [57bf0634-6f09-477b-aa03-f3ffde58071d] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.029366635s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-297880 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [c752697d-44e3-4a67-8141-777f146ac76b] Pending
helpers_test.go:344: "nginx-gvisor" [c752697d-44e3-4a67-8141-777f146ac76b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [c752697d-44e3-4a67-8141-777f146ac76b] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 14.036949017s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-297880
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-297880: (1m32.139009004s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-297880 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-297880 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (51.304659372s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [57bf0634-6f09-477b-aa03-f3ffde58071d] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.386625126s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [c752697d-44e3-4a67-8141-777f146ac76b] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.012833302s
helpers_test.go:175: Cleaning up "gvisor-297880" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-297880
E1002 20:19:47.100630  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/ingress-addon-legacy-851692/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-297880: (1.928209715s)
--- PASS: TestGvisorAddon (292.54s)

TestImageBuild/serial/Setup (52.19s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-815330 --driver=kvm2 
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-815330 --driver=kvm2 : (52.185797s)
--- PASS: TestImageBuild/serial/Setup (52.19s)

TestImageBuild/serial/NormalBuild (1.52s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-815330
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-815330: (1.516475614s)
--- PASS: TestImageBuild/serial/NormalBuild (1.52s)

TestImageBuild/serial/BuildWithBuildArg (1.28s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-815330
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-815330: (1.283959967s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.28s)

TestImageBuild/serial/BuildWithDockerIgnore (0.43s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-815330
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.43s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.29s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-815330
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.29s)

TestIngressAddonLegacy/StartLegacyK8sCluster (107.28s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-851692 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 
E1002 19:43:03.540758  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-851692 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : (1m47.279661313s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (107.28s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.44s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-851692 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-851692 addons enable ingress --alsologtostderr -v=5: (14.442283406s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.44s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.6s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-851692 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.60s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (34.06s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:185: (dbg) Run:  kubectl --context ingress-addon-legacy-851692 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:185: (dbg) Done: kubectl --context ingress-addon-legacy-851692 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.551200622s)
addons_test.go:210: (dbg) Run:  kubectl --context ingress-addon-legacy-851692 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:223: (dbg) Run:  kubectl --context ingress-addon-legacy-851692 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:228: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2ec5cea0-960a-4d9c-8609-d047970b8aeb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2ec5cea0-960a-4d9c-8609-d047970b8aeb] Running
addons_test.go:228: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.018906242s
addons_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-851692 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Run:  kubectl --context ingress-addon-legacy-851692 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:269: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-851692 ip
addons_test.go:275: (dbg) Run:  nslookup hello-john.test 192.168.39.238
addons_test.go:284: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-851692 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:284: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-851692 addons disable ingress-dns --alsologtostderr -v=1: (2.803094561s)
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-851692 addons disable ingress --alsologtostderr -v=1
E1002 19:45:19.691647  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
addons_test.go:289: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-851692 addons disable ingress --alsologtostderr -v=1: (7.534667279s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (34.06s)

TestJSONOutput/start/Command (67.27s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-378051 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E1002 19:45:45.427946  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/functional-000083/client.crt: no such file or directory
E1002 19:45:45.433227  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/functional-000083/client.crt: no such file or directory
E1002 19:45:45.443500  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/functional-000083/client.crt: no such file or directory
E1002 19:45:45.463803  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/functional-000083/client.crt: no such file or directory
E1002 19:45:45.504142  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/functional-000083/client.crt: no such file or directory
E1002 19:45:45.584505  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/functional-000083/client.crt: no such file or directory
E1002 19:45:45.744993  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/functional-000083/client.crt: no such file or directory
E1002 19:45:46.065609  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/functional-000083/client.crt: no such file or directory
E1002 19:45:46.706006  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/functional-000083/client.crt: no such file or directory
E1002 19:45:47.381567  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
E1002 19:45:47.986285  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/functional-000083/client.crt: no such file or directory
E1002 19:45:50.546671  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/functional-000083/client.crt: no such file or directory
E1002 19:45:55.667615  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/functional-000083/client.crt: no such file or directory
E1002 19:46:05.908600  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/functional-000083/client.crt: no such file or directory
E1002 19:46:26.389523  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/functional-000083/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-378051 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m7.26736133s)
--- PASS: TestJSONOutput/start/Command (67.27s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.57s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-378051 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.57s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.52s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-378051 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.52s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.08s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-378051 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-378051 --output=json --user=testUser: (8.0839454s)
--- PASS: TestJSONOutput/stop/Command (8.08s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.18s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-543279 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-543279 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (58.424149ms)

-- stdout --
	{"specversion":"1.0","id":"20a6b4a1-a4d3-4359-ad94-1cdb126499be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-543279] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"535bdccd-8c2d-46fa-8e73-c36353ae6e8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17323"}}
	{"specversion":"1.0","id":"75cfe345-d119-4027-b52d-7e42843b9860","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"89185ea9-7fdc-4cb7-ad06-1c93e3e7a42e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17323-390762/kubeconfig"}}
	{"specversion":"1.0","id":"dc051e56-d4bc-442a-b0a4-75362f7ac6c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-390762/.minikube"}}
	{"specversion":"1.0","id":"0dec3718-d198-46c3-bf88-cbc7de5d3c29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"745aa398-a675-48a2-a59c-3306dad6174c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"588dc833-393a-46e5-a27e-97a687e5f61c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-543279" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-543279
--- PASS: TestErrorJSONOutput (0.18s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (105.71s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-554233 --driver=kvm2 
E1002 19:47:07.349792  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/functional-000083/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-554233 --driver=kvm2 : (51.287784026s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-557014 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-557014 --driver=kvm2 : (51.599179999s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-554233
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-557014
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-557014" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-557014
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-557014: (1.008327203s)
helpers_test.go:175: Cleaning up "first-554233" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-554233
--- PASS: TestMinikubeProfile (105.71s)

TestMountStart/serial/StartWithMountFirst (31.11s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-839691 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
E1002 19:48:29.271653  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/functional-000083/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-839691 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (30.112603157s)
--- PASS: TestMountStart/serial/StartWithMountFirst (31.11s)

TestMountStart/serial/VerifyMountFirst (0.36s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-839691 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-839691 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)

TestMountStart/serial/StartWithMountSecond (29.77s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-856567 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-856567 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (28.773678263s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.77s)

TestMountStart/serial/VerifyMountSecond (0.38s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-856567 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-856567 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (0.9s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-839691 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.90s)

TestMountStart/serial/VerifyMountPostDelete (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-856567 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-856567 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

TestMountStart/serial/Stop (2.09s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-856567
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-856567: (2.087982915s)
--- PASS: TestMountStart/serial/Stop (2.09s)

TestMountStart/serial/RestartStopped (26.79s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-856567
E1002 19:49:47.100656  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/ingress-addon-legacy-851692/client.crt: no such file or directory
E1002 19:49:47.105998  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/ingress-addon-legacy-851692/client.crt: no such file or directory
E1002 19:49:47.116679  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/ingress-addon-legacy-851692/client.crt: no such file or directory
E1002 19:49:47.136968  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/ingress-addon-legacy-851692/client.crt: no such file or directory
E1002 19:49:47.177253  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/ingress-addon-legacy-851692/client.crt: no such file or directory
E1002 19:49:47.257632  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/ingress-addon-legacy-851692/client.crt: no such file or directory
E1002 19:49:47.418125  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/ingress-addon-legacy-851692/client.crt: no such file or directory
E1002 19:49:47.738727  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/ingress-addon-legacy-851692/client.crt: no such file or directory
E1002 19:49:48.379744  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/ingress-addon-legacy-851692/client.crt: no such file or directory
E1002 19:49:49.660317  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/ingress-addon-legacy-851692/client.crt: no such file or directory
E1002 19:49:52.220687  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/ingress-addon-legacy-851692/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-856567: (25.793452862s)
--- PASS: TestMountStart/serial/RestartStopped (26.79s)

TestMountStart/serial/VerifyMountPostStop (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-856567 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-856567 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (126.11s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-058614 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E1002 19:50:07.581226  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/ingress-addon-legacy-851692/client.crt: no such file or directory
E1002 19:50:19.691903  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
E1002 19:50:28.061780  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/ingress-addon-legacy-851692/client.crt: no such file or directory
E1002 19:50:45.425203  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/functional-000083/client.crt: no such file or directory
E1002 19:51:09.022976  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/ingress-addon-legacy-851692/client.crt: no such file or directory
E1002 19:51:13.111895  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/functional-000083/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-058614 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m5.698996906s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (126.11s)

TestMultiNode/serial/DeployApp2Nodes (4.11s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-058614 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-058614 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-058614 -- rollout status deployment/busybox: (2.362176648s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-058614 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-058614 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-058614 -- exec busybox-5bc68d56bd-dxdvv -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-058614 -- exec busybox-5bc68d56bd-kvr6v -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-058614 -- exec busybox-5bc68d56bd-dxdvv -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-058614 -- exec busybox-5bc68d56bd-kvr6v -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-058614 -- exec busybox-5bc68d56bd-dxdvv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-058614 -- exec busybox-5bc68d56bd-kvr6v -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.11s)

TestMultiNode/serial/PingHostFrom2Pods (0.85s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-058614 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-058614 -- exec busybox-5bc68d56bd-dxdvv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-058614 -- exec busybox-5bc68d56bd-dxdvv -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-058614 -- exec busybox-5bc68d56bd-kvr6v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-058614 -- exec busybox-5bc68d56bd-kvr6v -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)
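The host-IP extraction in the commands above depends on a fixed nslookup output shape: `awk 'NR==5'` keeps only the fifth line and `cut -d' ' -f3` takes its third space-separated field. A minimal local sketch of that pipeline, assuming the older busybox format where line 5 reads `Address 1: <ip> <name>` (the sample output below is illustrative, not captured from this run):

```shell
# Simulated busybox nslookup output (assumed format); the real test pipes
# "nslookup host.minikube.internal" through the same awk/cut stages.
printf 'Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 192.168.39.1 host.minikube.internal\n' \
  | awk 'NR==5' | cut -d" " -f3
# prints 192.168.39.1
```

The extracted address (here the KVM bridge gateway `192.168.39.1`) is then what the follow-up `ping -c 1` probes from inside each pod.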

                                                
                                    
TestMultiNode/serial/AddNode (47.12s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-058614 -v 3 --alsologtostderr
E1002 19:52:30.943596  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/ingress-addon-legacy-851692/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-058614 -v 3 --alsologtostderr: (46.553164405s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.12s)

TestMultiNode/serial/ProfileList (0.21s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (7.17s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 cp testdata/cp-test.txt multinode-058614:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 ssh -n multinode-058614 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 cp multinode-058614:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3174959036/001/cp-test_multinode-058614.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 ssh -n multinode-058614 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 cp multinode-058614:/home/docker/cp-test.txt multinode-058614-m02:/home/docker/cp-test_multinode-058614_multinode-058614-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 ssh -n multinode-058614 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 ssh -n multinode-058614-m02 "sudo cat /home/docker/cp-test_multinode-058614_multinode-058614-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 cp multinode-058614:/home/docker/cp-test.txt multinode-058614-m03:/home/docker/cp-test_multinode-058614_multinode-058614-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 ssh -n multinode-058614 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 ssh -n multinode-058614-m03 "sudo cat /home/docker/cp-test_multinode-058614_multinode-058614-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 cp testdata/cp-test.txt multinode-058614-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 ssh -n multinode-058614-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 cp multinode-058614-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3174959036/001/cp-test_multinode-058614-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 ssh -n multinode-058614-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 cp multinode-058614-m02:/home/docker/cp-test.txt multinode-058614:/home/docker/cp-test_multinode-058614-m02_multinode-058614.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 ssh -n multinode-058614-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 ssh -n multinode-058614 "sudo cat /home/docker/cp-test_multinode-058614-m02_multinode-058614.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 cp multinode-058614-m02:/home/docker/cp-test.txt multinode-058614-m03:/home/docker/cp-test_multinode-058614-m02_multinode-058614-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 ssh -n multinode-058614-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 ssh -n multinode-058614-m03 "sudo cat /home/docker/cp-test_multinode-058614-m02_multinode-058614-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 cp testdata/cp-test.txt multinode-058614-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 ssh -n multinode-058614-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 cp multinode-058614-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3174959036/001/cp-test_multinode-058614-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 ssh -n multinode-058614-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 cp multinode-058614-m03:/home/docker/cp-test.txt multinode-058614:/home/docker/cp-test_multinode-058614-m03_multinode-058614.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 ssh -n multinode-058614-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 ssh -n multinode-058614 "sudo cat /home/docker/cp-test_multinode-058614-m03_multinode-058614.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 cp multinode-058614-m03:/home/docker/cp-test.txt multinode-058614-m02:/home/docker/cp-test_multinode-058614-m03_multinode-058614-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 ssh -n multinode-058614-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 ssh -n multinode-058614-m02 "sudo cat /home/docker/cp-test_multinode-058614-m03_multinode-058614-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.17s)
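Every CopyFile step above follows the same round trip: copy a file onto a node with `minikube cp`, then `ssh` in and `cat` it back to confirm the contents survived the transfer. A local stand-in for that pattern, with plain `cp`/`cmp` replacing the node transfer (paths here are hypothetical scratch files, not the `/home/docker/cp-test.txt` destinations from the log):

```shell
# Local sketch of the copy-and-verify round trip; a temp directory stands
# in for the node filesystem reached via "minikube cp" and "ssh cat".
tmp=$(mktemp -d)
printf 'Test file for the cp command\n' > "$tmp/cp-test.txt"
cp "$tmp/cp-test.txt" "$tmp/cp-test_copy.txt"            # "minikube cp" stand-in
cmp "$tmp/cp-test.txt" "$tmp/cp-test_copy.txt" && echo "contents match"
rm -r "$tmp"
```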

                                                
                                    
TestMultiNode/serial/StopNode (3.95s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-058614 node stop m03: (3.079259073s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-058614 status: exit status 7 (446.220458ms)

-- stdout --
	multinode-058614
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-058614-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-058614-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-058614 status --alsologtostderr: exit status 7 (425.545574ms)

-- stdout --
	multinode-058614
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-058614-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-058614-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1002 19:53:07.304615  412489 out.go:296] Setting OutFile to fd 1 ...
	I1002 19:53:07.304787  412489 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 19:53:07.304802  412489 out.go:309] Setting ErrFile to fd 2...
	I1002 19:53:07.304809  412489 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 19:53:07.304989  412489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-390762/.minikube/bin
	I1002 19:53:07.305149  412489 out.go:303] Setting JSON to false
	I1002 19:53:07.305196  412489 mustload.go:65] Loading cluster: multinode-058614
	I1002 19:53:07.305322  412489 notify.go:220] Checking for updates...
	I1002 19:53:07.305592  412489 config.go:182] Loaded profile config "multinode-058614": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 19:53:07.305605  412489 status.go:255] checking status of multinode-058614 ...
	I1002 19:53:07.305945  412489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:53:07.306012  412489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:53:07.325732  412489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41677
	I1002 19:53:07.326164  412489 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:53:07.326806  412489 main.go:141] libmachine: Using API Version  1
	I1002 19:53:07.326844  412489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:53:07.327184  412489 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:53:07.327385  412489 main.go:141] libmachine: (multinode-058614) Calling .GetState
	I1002 19:53:07.328966  412489 status.go:330] multinode-058614 host status = "Running" (err=<nil>)
	I1002 19:53:07.328982  412489 host.go:66] Checking if "multinode-058614" exists ...
	I1002 19:53:07.329248  412489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:53:07.329275  412489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:53:07.343886  412489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37207
	I1002 19:53:07.344278  412489 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:53:07.344831  412489 main.go:141] libmachine: Using API Version  1
	I1002 19:53:07.344867  412489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:53:07.345173  412489 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:53:07.346000  412489 main.go:141] libmachine: (multinode-058614) Calling .GetIP
	I1002 19:53:07.348899  412489 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:53:07.349353  412489 main.go:141] libmachine: (multinode-058614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:90:6b", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:50:13 +0000 UTC Type:0 Mac:52:54:00:c7:90:6b Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-058614 Clientid:01:52:54:00:c7:90:6b}
	I1002 19:53:07.349382  412489 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined IP address 192.168.39.83 and MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:53:07.349488  412489 host.go:66] Checking if "multinode-058614" exists ...
	I1002 19:53:07.349955  412489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:53:07.350028  412489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:53:07.364741  412489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37807
	I1002 19:53:07.365120  412489 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:53:07.365549  412489 main.go:141] libmachine: Using API Version  1
	I1002 19:53:07.365564  412489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:53:07.365872  412489 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:53:07.366047  412489 main.go:141] libmachine: (multinode-058614) Calling .DriverName
	I1002 19:53:07.366310  412489 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 19:53:07.366346  412489 main.go:141] libmachine: (multinode-058614) Calling .GetSSHHostname
	I1002 19:53:07.369026  412489 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:53:07.369487  412489 main.go:141] libmachine: (multinode-058614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:90:6b", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:50:13 +0000 UTC Type:0 Mac:52:54:00:c7:90:6b Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:multinode-058614 Clientid:01:52:54:00:c7:90:6b}
	I1002 19:53:07.369525  412489 main.go:141] libmachine: (multinode-058614) DBG | domain multinode-058614 has defined IP address 192.168.39.83 and MAC address 52:54:00:c7:90:6b in network mk-multinode-058614
	I1002 19:53:07.369695  412489 main.go:141] libmachine: (multinode-058614) Calling .GetSSHPort
	I1002 19:53:07.369862  412489 main.go:141] libmachine: (multinode-058614) Calling .GetSSHKeyPath
	I1002 19:53:07.369987  412489 main.go:141] libmachine: (multinode-058614) Calling .GetSSHUsername
	I1002 19:53:07.370127  412489 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614/id_rsa Username:docker}
	I1002 19:53:07.458904  412489 ssh_runner.go:195] Run: systemctl --version
	I1002 19:53:07.464530  412489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 19:53:07.479493  412489 kubeconfig.go:92] found "multinode-058614" server: "https://192.168.39.83:8443"
	I1002 19:53:07.479520  412489 api_server.go:166] Checking apiserver status ...
	I1002 19:53:07.479555  412489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 19:53:07.491145  412489 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1849/cgroup
	I1002 19:53:07.499128  412489 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/podad6af5517be9484355d3192cf7264036/036c4b8dda6972223763c3d030c618a154f8ec16c571629c347a28f8a6efc046"
	I1002 19:53:07.499186  412489 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podad6af5517be9484355d3192cf7264036/036c4b8dda6972223763c3d030c618a154f8ec16c571629c347a28f8a6efc046/freezer.state
	I1002 19:53:07.509014  412489 api_server.go:204] freezer state: "THAWED"
	I1002 19:53:07.509038  412489 api_server.go:253] Checking apiserver healthz at https://192.168.39.83:8443/healthz ...
	I1002 19:53:07.514309  412489 api_server.go:279] https://192.168.39.83:8443/healthz returned 200:
	ok
	I1002 19:53:07.514332  412489 status.go:421] multinode-058614 apiserver status = Running (err=<nil>)
	I1002 19:53:07.514341  412489 status.go:257] multinode-058614 status: &{Name:multinode-058614 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 19:53:07.514357  412489 status.go:255] checking status of multinode-058614-m02 ...
	I1002 19:53:07.514745  412489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:53:07.514778  412489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:53:07.529980  412489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38655
	I1002 19:53:07.530437  412489 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:53:07.530917  412489 main.go:141] libmachine: Using API Version  1
	I1002 19:53:07.530939  412489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:53:07.531272  412489 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:53:07.531493  412489 main.go:141] libmachine: (multinode-058614-m02) Calling .GetState
	I1002 19:53:07.533155  412489 status.go:330] multinode-058614-m02 host status = "Running" (err=<nil>)
	I1002 19:53:07.533172  412489 host.go:66] Checking if "multinode-058614-m02" exists ...
	I1002 19:53:07.533541  412489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:53:07.533569  412489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:53:07.548311  412489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41679
	I1002 19:53:07.548705  412489 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:53:07.549261  412489 main.go:141] libmachine: Using API Version  1
	I1002 19:53:07.549280  412489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:53:07.549616  412489 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:53:07.549811  412489 main.go:141] libmachine: (multinode-058614-m02) Calling .GetIP
	I1002 19:53:07.552754  412489 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:53:07.553270  412489 main.go:141] libmachine: (multinode-058614-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:71:7c", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:51:29 +0000 UTC Type:0 Mac:52:54:00:fb:71:7c Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-058614-m02 Clientid:01:52:54:00:fb:71:7c}
	I1002 19:53:07.553315  412489 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:53:07.553445  412489 host.go:66] Checking if "multinode-058614-m02" exists ...
	I1002 19:53:07.553857  412489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:53:07.553907  412489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:53:07.568489  412489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36731
	I1002 19:53:07.568893  412489 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:53:07.569432  412489 main.go:141] libmachine: Using API Version  1
	I1002 19:53:07.569482  412489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:53:07.569820  412489 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:53:07.570020  412489 main.go:141] libmachine: (multinode-058614-m02) Calling .DriverName
	I1002 19:53:07.570219  412489 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 19:53:07.570242  412489 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHHostname
	I1002 19:53:07.572766  412489 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:53:07.573179  412489 main.go:141] libmachine: (multinode-058614-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:71:7c", ip: ""} in network mk-multinode-058614: {Iface:virbr1 ExpiryTime:2023-10-02 20:51:29 +0000 UTC Type:0 Mac:52:54:00:fb:71:7c Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-058614-m02 Clientid:01:52:54:00:fb:71:7c}
	I1002 19:53:07.573224  412489 main.go:141] libmachine: (multinode-058614-m02) DBG | domain multinode-058614-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:fb:71:7c in network mk-multinode-058614
	I1002 19:53:07.573387  412489 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHPort
	I1002 19:53:07.573559  412489 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHKeyPath
	I1002 19:53:07.573745  412489 main.go:141] libmachine: (multinode-058614-m02) Calling .GetSSHUsername
	I1002 19:53:07.573914  412489 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17323-390762/.minikube/machines/multinode-058614-m02/id_rsa Username:docker}
	I1002 19:53:07.658438  412489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 19:53:07.671140  412489 status.go:257] multinode-058614-m02 status: &{Name:multinode-058614-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1002 19:53:07.671192  412489 status.go:255] checking status of multinode-058614-m03 ...
	I1002 19:53:07.671611  412489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:53:07.671671  412489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:53:07.687917  412489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35929
	I1002 19:53:07.688368  412489 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:53:07.688905  412489 main.go:141] libmachine: Using API Version  1
	I1002 19:53:07.688927  412489 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:53:07.689262  412489 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:53:07.689420  412489 main.go:141] libmachine: (multinode-058614-m03) Calling .GetState
	I1002 19:53:07.690942  412489 status.go:330] multinode-058614-m03 host status = "Stopped" (err=<nil>)
	I1002 19:53:07.690957  412489 status.go:343] host is not running, skipping remaining checks
	I1002 19:53:07.690962  412489 status.go:257] multinode-058614-m03 status: &{Name:multinode-058614-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.95s)
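The status check in the stderr log above samples disk usage with `df -h /var | awk 'NR==2{print $5}'`: row 2 of df output is the data row for the queried filesystem, and field 5 is its Use% column. Run locally against `/` (a sketch; the value depends on the machine), the same pipeline yields a single percentage:

```shell
# Keep the data row (NR==2) of df output and print its Use% column ($5),
# e.g. "42%"; this mirrors the check minikube runs against /var on the VM.
df -h / | awk 'NR==2{print $5}'
```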

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (267.1s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-058614
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-058614
E1002 19:54:47.100726  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/ingress-addon-legacy-851692/client.crt: no such file or directory
E1002 19:55:14.786043  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/ingress-addon-legacy-851692/client.crt: no such file or directory
E1002 19:55:19.693604  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-058614: (1m54.847865352s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-058614 --wait=true -v=8 --alsologtostderr
E1002 19:55:45.424971  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/functional-000083/client.crt: no such file or directory
E1002 19:56:42.742347  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-058614 --wait=true -v=8 --alsologtostderr: (2m32.161410463s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-058614
--- PASS: TestMultiNode/serial/RestartKeepsNodes (267.10s)

TestMultiNode/serial/DeleteNode (1.73s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-058614 node delete m03: (1.191167628s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.73s)

TestMultiNode/serial/StopMultiNode (25.6s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-058614 stop: (25.445732267s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-058614 status: exit status 7 (79.389971ms)

-- stdout --
	multinode-058614
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-058614-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-058614 status --alsologtostderr: exit status 7 (75.708167ms)

-- stdout --
	multinode-058614
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-058614-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1002 19:58:24.017610  414183 out.go:296] Setting OutFile to fd 1 ...
	I1002 19:58:24.017860  414183 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 19:58:24.017870  414183 out.go:309] Setting ErrFile to fd 2...
	I1002 19:58:24.017875  414183 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 19:58:24.018042  414183 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-390762/.minikube/bin
	I1002 19:58:24.018206  414183 out.go:303] Setting JSON to false
	I1002 19:58:24.018239  414183 mustload.go:65] Loading cluster: multinode-058614
	I1002 19:58:24.018288  414183 notify.go:220] Checking for updates...
	I1002 19:58:24.018585  414183 config.go:182] Loaded profile config "multinode-058614": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 19:58:24.018601  414183 status.go:255] checking status of multinode-058614 ...
	I1002 19:58:24.019019  414183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:58:24.019087  414183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:58:24.034718  414183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36511
	I1002 19:58:24.035203  414183 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:58:24.035787  414183 main.go:141] libmachine: Using API Version  1
	I1002 19:58:24.035809  414183 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:58:24.036218  414183 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:58:24.036445  414183 main.go:141] libmachine: (multinode-058614) Calling .GetState
	I1002 19:58:24.038055  414183 status.go:330] multinode-058614 host status = "Stopped" (err=<nil>)
	I1002 19:58:24.038070  414183 status.go:343] host is not running, skipping remaining checks
	I1002 19:58:24.038080  414183 status.go:257] multinode-058614 status: &{Name:multinode-058614 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 19:58:24.038103  414183 status.go:255] checking status of multinode-058614-m02 ...
	I1002 19:58:24.038375  414183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:58:24.038417  414183 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:58:24.052556  414183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36481
	I1002 19:58:24.052965  414183 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:58:24.053351  414183 main.go:141] libmachine: Using API Version  1
	I1002 19:58:24.053394  414183 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:58:24.053680  414183 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:58:24.053864  414183 main.go:141] libmachine: (multinode-058614-m02) Calling .GetState
	I1002 19:58:24.055398  414183 status.go:330] multinode-058614-m02 host status = "Stopped" (err=<nil>)
	I1002 19:58:24.055415  414183 status.go:343] host is not running, skipping remaining checks
	I1002 19:58:24.055423  414183 status.go:257] multinode-058614-m02 status: &{Name:multinode-058614-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.60s)

TestMultiNode/serial/RestartMultiNode (116.66s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-058614 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E1002 19:59:47.100037  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/ingress-addon-legacy-851692/client.crt: no such file or directory
E1002 20:00:19.692080  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-058614 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m56.112005523s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-058614 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (116.66s)
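For readers unfamiliar with the go-template exercised by the checks above (multinode_test.go:382/432), it walks every node's `status.conditions` and prints the status of the `Ready` condition. A minimal Python sketch of the same filter, run over hypothetical `kubectl get nodes -o json` output (the node payload below is invented for illustration, trimmed to the fields the template actually touches):

```python
import json

# Hypothetical, trimmed output of `kubectl get nodes -o json`.
nodes_json = json.loads("""
{
  "items": [
    {"status": {"conditions": [
        {"type": "MemoryPressure", "status": "False"},
        {"type": "Ready", "status": "True"}
    ]}},
    {"status": {"conditions": [
        {"type": "Ready", "status": "True"}
    ]}}
  ]
}
""")

# Equivalent of: {{range .items}}{{range .status.conditions}}
#   {{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}
ready_statuses = [
    cond["status"]
    for item in nodes_json["items"]
    for cond in item["status"]["conditions"]
    if cond["type"] == "Ready"
]
print(ready_statuses)  # one entry per node's Ready condition
```

The test passes when every printed status is `True`, i.e. all nodes report Ready.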

TestMultiNode/serial/ValidateNameConflict (52.89s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-058614
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-058614-m02 --driver=kvm2 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-058614-m02 --driver=kvm2 : exit status 14 (60.2072ms)

-- stdout --
	* [multinode-058614-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17323
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17323-390762/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-390762/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-058614-m02' is duplicated with machine name 'multinode-058614-m02' in profile 'multinode-058614'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-058614-m03 --driver=kvm2 
E1002 20:00:45.427699  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/functional-000083/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-058614-m03 --driver=kvm2 : (51.570579063s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-058614
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-058614: exit status 80 (233.129768ms)

-- stdout --
	* Adding node m03 to cluster multinode-058614
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-058614-m03 already exists in multinode-058614-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-058614-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (52.89s)

TestPreload (205.81s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-377104 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E1002 20:02:08.473252  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/functional-000083/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-377104 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (2m4.11838853s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-377104 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-377104 image pull gcr.io/k8s-minikube/busybox: (1.262381324s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-377104
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-377104: (13.094949355s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-377104 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-377104 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m6.045507585s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-377104 image list
helpers_test.go:175: Cleaning up "test-preload-377104" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-377104
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-377104: (1.091058744s)
--- PASS: TestPreload (205.81s)

TestScheduledStopUnix (123.25s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-971528 --memory=2048 --driver=kvm2 
E1002 20:04:47.100560  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/ingress-addon-legacy-851692/client.crt: no such file or directory
E1002 20:05:19.691633  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-971528 --memory=2048 --driver=kvm2 : (51.654500648s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-971528 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-971528 -n scheduled-stop-971528
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-971528 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-971528 --cancel-scheduled
E1002 20:05:45.427963  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/functional-000083/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-971528 -n scheduled-stop-971528
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-971528
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-971528 --schedule 15s
E1002 20:06:10.146355  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/ingress-addon-legacy-851692/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-971528
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-971528: exit status 7 (64.3606ms)

-- stdout --
	scheduled-stop-971528
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-971528 -n scheduled-stop-971528
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-971528 -n scheduled-stop-971528: exit status 7 (58.800654ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-971528" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-971528
--- PASS: TestScheduledStopUnix (123.25s)

TestSkaffold (145.38s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3731530400 version
skaffold_test.go:63: skaffold version: v2.7.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-492655 --memory=2600 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-492655 --memory=2600 --driver=kvm2 : (53.72681643s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3731530400 run --minikube-profile skaffold-492655 --kube-context skaffold-492655 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3731530400 run --minikube-profile skaffold-492655 --kube-context skaffold-492655 --status-check=true --port-forward=false --interactive=false: (1m19.702078502s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-d75967c9f-z7vk9" [451d81d0-bcb0-4007-bc38-b1fff814d8ac] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.021710208s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-c8ccb8d78-m84zg" [5a42e370-d13e-433d-9c6a-f8d50816857e] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.010703918s
helpers_test.go:175: Cleaning up "skaffold-492655" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-492655
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-492655: (1.218267654s)
--- PASS: TestSkaffold (145.38s)

TestRunningBinaryUpgrade (203.36s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.1498164382.exe start -p running-upgrade-365232 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:133: (dbg) Non-zero exit: /tmp/minikube-v1.6.2.1498164382.exe start -p running-upgrade-365232 --memory=2200 --vm-driver=kvm2 : exit status 70 (1.723093515s)

-- stdout --
	! [running-upgrade-365232] minikube v1.6.2 on Ubuntu 20.04
	  - MINIKUBE_LOCATION=17323
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-390762/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig4174757511
	* Selecting 'kvm2' driver from user configuration (alternates: [none])
	* Downloading VM boot image ...

-- /stdout --
** stderr ** 
	* minikube 1.31.2 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.31.2
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	
	! 'kvm2' driver reported an issue: /usr/bin/virsh domcapabilities --virttype kvm failed:
	error: failed to get emulator capabilities
	error: invalid argument: KVM is not supported by '/usr/bin/qemu-system-x86_64' on this host
	* Suggestion: Follow your Linux distribution instructions for configuring KVM
	* Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
	
	    > minikube-v1.6.0.iso.sha256: 65 B / 65 B [--------------] 100.00% ? p/s 0s
	    > minikube-v1.6.0.iso: 25.45 MiB / 150.93 MiB [-->_________] 16.86% ? p/s ?
	    > minikube-v1.6.0.iso: 68.19 MiB / 150.93 MiB [----->______] 45.18% ? p/s ?
	    > minikube-v1.6.0.iso: 111.26 MiB / 150.93 MiB [-------->__] 73.71% ? p/s ?
	    > minikube-v1.6.0.iso: 150.93 MiB / 150.93 MiB [] 100.00% 262.13 MiB p/s 1s
	* 
	X Failed to cache ISO: rename /home/jenkins/minikube-integration/17323-390762/.minikube/cache/iso/minikube-v1.6.0.iso.download /home/jenkins/minikube-integration/17323-390762/.minikube/cache/iso/minikube-v1.6.0.iso: no such file or directory
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.1498164382.exe start -p running-upgrade-365232 --memory=2200 --vm-driver=kvm2 
E1002 20:09:47.100110  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/ingress-addon-legacy-851692/client.crt: no such file or directory
E1002 20:10:19.692308  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.1498164382.exe start -p running-upgrade-365232 --memory=2200 --vm-driver=kvm2 : (1m51.618091071s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-365232 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-365232 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m26.941514342s)
helpers_test.go:175: Cleaning up "running-upgrade-365232" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-365232
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-365232: (1.359154663s)
--- PASS: TestRunningBinaryUpgrade (203.36s)

TestKubernetesUpgrade (224.08s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-106345 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-106345 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : (1m20.934010165s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-106345
E1002 20:10:45.424885  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/functional-000083/client.crt: no such file or directory
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-106345: (13.443161334s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-106345 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-106345 status --format={{.Host}}: exit status 7 (69.352302ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-106345 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-106345 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2 : (45.736203881s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-106345 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-106345 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-106345 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 : exit status 106 (84.756331ms)

-- stdout --
	* [kubernetes-upgrade-106345] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17323
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17323-390762/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-390762/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-106345
	    minikube start -p kubernetes-upgrade-106345 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1063452 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.2, by running:
	    
	    minikube start -p kubernetes-upgrade-106345 --kubernetes-version=v1.28.2
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-106345 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-106345 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2 : (1m22.666638965s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-106345" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-106345
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-106345: (1.079583002s)
--- PASS: TestKubernetesUpgrade (224.08s)

TestStoppedBinaryUpgrade/Setup (0.49s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.49s)

TestStoppedBinaryUpgrade/Upgrade (213.83s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.974943893.exe start -p stopped-upgrade-269210 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.974943893.exe start -p stopped-upgrade-269210 --memory=2200 --vm-driver=kvm2 : (1m53.899783356s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.974943893.exe -p stopped-upgrade-269210 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.974943893.exe -p stopped-upgrade-269210 stop: (13.402697011s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-269210 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-269210 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m26.523815789s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (213.83s)

TestPause/serial/Start (79.05s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-633929 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-633929 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m19.051363103s)
--- PASS: TestPause/serial/Start (79.05s)

TestPause/serial/SecondStartNoReconfiguration (47.91s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-633929 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-633929 --alsologtostderr -v=1 --driver=kvm2 : (47.88419847s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (47.91s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.62s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-269210
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-269210: (1.615786131s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.62s)

TestPause/serial/Pause (0.57s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-633929 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.57s)

TestPause/serial/VerifyStatus (0.26s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-633929 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-633929 --output=json --layout=cluster: exit status 2 (260.89143ms)

-- stdout --
	{"Name":"pause-633929","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-633929","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.26s)
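The `--output=json --layout=cluster` payload above is machine-readable; a small Python sketch that extracts each component's state from the exact JSON captured in this log (nothing here is invented except the variable names):

```python
import json

# The exact `status --output=json --layout=cluster` payload from the log above.
raw = ('{"Name":"pause-633929","StatusCode":418,"StatusName":"Paused",'
       '"Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, '
       'kubernetes-dashboard, storage-gluster, istio-operator",'
       '"BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig",'
       '"StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-633929",'
       '"StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver",'
       '"StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet",'
       '"StatusCode":405,"StatusName":"Stopped"}}}]}')
status = json.loads(raw)

# Collect each per-node component's state, mirroring what the test inspects:
# a paused cluster reports StatusCode 418 ("Paused") at the top level, with
# the apiserver paused and the kubelet stopped.
component_states = {
    name: comp["StatusName"]
    for node in status["Nodes"]
    for name, comp in node["Components"].items()
}
print(status["StatusName"], component_states)
```

The non-zero exit (status 2) is expected here: `minikube status` signals a non-Running cluster through its exit code while still emitting the JSON.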

TestPause/serial/Unpause (0.57s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-633929 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.57s)

TestPause/serial/PauseAgain (0.79s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-633929 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.79s)

TestPause/serial/DeletePaused (1.88s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-633929 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-633929 --alsologtostderr -v=5: (1.884335336s)
--- PASS: TestPause/serial/DeletePaused (1.88s)

TestPause/serial/VerifyDeletedResources (0.29s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.29s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-334357 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-334357 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (68.144067ms)

-- stdout --
	* [NoKubernetes-334357] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17323
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17323-390762/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-390762/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
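The MK_USAGE failure above (exit status 14) comes from a flag-conflict check: `--kubernetes-version` cannot be combined with `--no-kubernetes`. A hypothetical re-creation of that check in Python; the function name and message wording are illustrative only, not minikube's actual implementation:

```python
# Illustrative sketch of the flag validation behind the MK_USAGE error
# above. Only the rule itself is taken from the log output; everything
# else here is hypothetical.
def validate_start_flags(no_kubernetes, kubernetes_version=None):
    if no_kubernetes and kubernetes_version is not None:
        raise ValueError(
            "cannot specify --kubernetes-version with --no-kubernetes; "
            "a globally configured version can be cleared with "
            "`minikube config unset kubernetes-version`"
        )

validate_start_flags(no_kubernetes=True)  # no version given: accepted
try:
    validate_start_flags(no_kubernetes=True, kubernetes_version="1.20")
except ValueError as err:
    print("usage error:", err)  # minikube surfaces this as exit status 14
```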
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

TestNoKubernetes/serial/StartWithK8s (102.2s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-334357 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-334357 --driver=kvm2 : (1m41.911948714s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-334357 status -o json
E1002 20:15:19.691725  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
--- PASS: TestNoKubernetes/serial/StartWithK8s (102.20s)

TestNoKubernetes/serial/StartWithStopK8s (38.09s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-334357 --no-kubernetes --driver=kvm2 
E1002 20:15:22.240895  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/skaffold-492655/client.crt: no such file or directory
E1002 20:15:45.425957  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/functional-000083/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-334357 --no-kubernetes --driver=kvm2 : (36.5394556s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-334357 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-334357 status -o json: exit status 2 (234.123445ms)

-- stdout --
	{"Name":"NoKubernetes-334357","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
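The status JSON above is what the test inspects: after the `--no-kubernetes` restart, the VM host is running but the Kubernetes components are stopped, which is also why `minikube status` itself exits non-zero (status 2) here. A small Python sketch (not part of the test suite) decoding that line:

```python
import json

# Status line copied verbatim from the output above.
raw = ('{"Name":"NoKubernetes-334357","Host":"Running",'
       '"Kubelet":"Stopped","APIServer":"Stopped",'
       '"Kubeconfig":"Configured","Worker":false}')

status = json.loads(raw)

# Host is up, but both Kubernetes components are stopped.
assert status["Host"] == "Running"
assert status["Kubelet"] == "Stopped" and status["APIServer"] == "Stopped"
print("Kubernetes disabled on running host:", status["Name"])
```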
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-334357
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-334357: (1.319900284s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (38.09s)

TestStartStop/group/old-k8s-version/serial/FirstStart (161.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-864077 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-864077 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (2m41.30367387s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (161.30s)

TestNoKubernetes/serial/Start (60.51s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-334357 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-334357 --no-kubernetes --driver=kvm2 : (1m0.506406117s)
--- PASS: TestNoKubernetes/serial/Start (60.51s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-334357 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-334357 "sudo systemctl is-active --quiet service kubelet": exit status 1 (217.294788ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
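The test verifies Kubernetes is not running by checking that `systemctl is-active --quiet ...` fails over SSH. The "Process exited with status 3" above is consistent with systemd's documented behaviour: `is-active` exits 0 when the unit is active and non-zero otherwise, with 3 being the usual code for an inactive unit. A small reference table, for illustration only:

```python
# Reference mapping for `systemctl is-active` exit statuses as this test
# interprets them; the 0/3 meanings follow systemd's documentation, and
# the "observed" value below is the exit status reported in the stderr
# output above.
SYSTEMCTL_IS_ACTIVE = {
    0: "active",         # kubelet running -> the assertion would fail
    3: "inactive/dead",  # observed here: kubelet not running
}

observed = 3  # exit status from the SSH command above
print("kubelet:", SYSTEMCTL_IS_ACTIVE.get(observed, "other non-active state"))
```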
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

TestNoKubernetes/serial/ProfileList (1.32s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.32s)

TestNoKubernetes/serial/Stop (2.21s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-334357
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-334357: (2.205414417s)
--- PASS: TestNoKubernetes/serial/Stop (2.21s)

TestNoKubernetes/serial/StartNoArgs (26.88s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-334357 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-334357 --driver=kvm2 : (26.884857156s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (26.88s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-334357 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-334357 "sudo systemctl is-active --quiet service kubelet": exit status 1 (206.964997ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

TestStartStop/group/no-preload/serial/FirstStart (118.19s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-016464 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-016464 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.2: (1m58.191968639s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (118.19s)

TestStartStop/group/embed-certs/serial/FirstStart (75.57s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-807615 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-807615 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.2: (1m15.566987771s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (75.57s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-864077 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ea7d0b78-4cdc-496f-ba81-6a62e48189ff] Pending
helpers_test.go:344: "busybox" [ea7d0b78-4cdc-496f-ba81-6a62e48189ff] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ea7d0b78-4cdc-496f-ba81-6a62e48189ff] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.047246947s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-864077 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.51s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-864077 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-864077 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.07s)

TestStartStop/group/old-k8s-version/serial/Stop (13.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-864077 --alsologtostderr -v=3
E1002 20:18:48.473966  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/functional-000083/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-864077 --alsologtostderr -v=3: (13.329370375s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.33s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-864077 -n old-k8s-version-864077
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-864077 -n old-k8s-version-864077: exit status 7 (106.783406ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-864077 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/old-k8s-version/serial/SecondStart (464.78s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-864077 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
E1002 20:19:00.318077  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/skaffold-492655/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-864077 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (7m44.446606042s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-864077 -n old-k8s-version-864077
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (464.78s)

TestStartStop/group/embed-certs/serial/DeployApp (9.57s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-807615 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fd457f7e-9fd6-48e4-a694-5b8eaa49f7a7] Pending
helpers_test.go:344: "busybox" [fd457f7e-9fd6-48e4-a694-5b8eaa49f7a7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fd457f7e-9fd6-48e4-a694-5b8eaa49f7a7] Running
E1002 20:19:28.001668  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/skaffold-492655/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.0363482s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-807615 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.57s)

TestStartStop/group/no-preload/serial/DeployApp (11.46s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-016464 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fbb9efae-8037-4505-bbc9-06b523f1c4f2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fbb9efae-8037-4505-bbc9-06b523f1c4f2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.030468213s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-016464 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.46s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.36s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-807615 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-807615 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.26104501s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-807615 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.36s)

TestStartStop/group/embed-certs/serial/Stop (13.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-807615 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-807615 --alsologtostderr -v=3: (13.113865527s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.11s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.39s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-016464 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-016464 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.291681238s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-016464 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.39s)

TestStartStop/group/no-preload/serial/Stop (13.12s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-016464 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-016464 --alsologtostderr -v=3: (13.119660513s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.12s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-807615 -n embed-certs-807615
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-807615 -n embed-certs-807615: exit status 7 (61.428288ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-807615 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/embed-certs/serial/SecondStart (333.94s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-807615 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-807615 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.2: (5m33.591681079s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-807615 -n embed-certs-807615
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (333.94s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (95.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-063235 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-063235 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.2: (1m35.408846047s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (95.41s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-016464 -n no-preload-016464
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-016464 -n no-preload-016464: exit status 7 (80.447719ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-016464 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (381s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-016464 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.2
E1002 20:20:19.691516  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
E1002 20:20:45.425846  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/functional-000083/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-016464 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.2: (6m20.703564353s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-016464 -n no-preload-016464
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (381.00s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-063235 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [476b9e1e-b5fe-4155-8ef1-2f46ab542590] Pending
helpers_test.go:344: "busybox" [476b9e1e-b5fe-4155-8ef1-2f46ab542590] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [476b9e1e-b5fe-4155-8ef1-2f46ab542590] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.041580933s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-063235 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.54s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-063235 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-063235 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.108471165s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-063235 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-063235 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-063235 --alsologtostderr -v=3: (13.11019461s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.11s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-063235 -n default-k8s-diff-port-063235
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-063235 -n default-k8s-diff-port-063235: exit status 7 (70.918734ms)

-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-063235 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (330.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-063235 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.2
E1002 20:21:53.282050  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/gvisor-297880/client.crt: no such file or directory
E1002 20:21:53.287339  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/gvisor-297880/client.crt: no such file or directory
E1002 20:21:53.297664  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/gvisor-297880/client.crt: no such file or directory
E1002 20:21:53.317939  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/gvisor-297880/client.crt: no such file or directory
E1002 20:21:53.358267  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/gvisor-297880/client.crt: no such file or directory
E1002 20:21:53.438643  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/gvisor-297880/client.crt: no such file or directory
E1002 20:21:53.599543  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/gvisor-297880/client.crt: no such file or directory
E1002 20:21:53.920152  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/gvisor-297880/client.crt: no such file or directory
E1002 20:21:54.560547  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/gvisor-297880/client.crt: no such file or directory
E1002 20:21:55.841521  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/gvisor-297880/client.crt: no such file or directory
E1002 20:21:58.402377  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/gvisor-297880/client.crt: no such file or directory
E1002 20:22:03.523576  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/gvisor-297880/client.crt: no such file or directory
E1002 20:22:13.764224  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/gvisor-297880/client.crt: no such file or directory
E1002 20:22:34.245391  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/gvisor-297880/client.crt: no such file or directory
E1002 20:22:50.147366  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/ingress-addon-legacy-851692/client.crt: no such file or directory
E1002 20:23:15.206400  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/gvisor-297880/client.crt: no such file or directory
E1002 20:24:00.317837  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/skaffold-492655/client.crt: no such file or directory
E1002 20:24:37.127481  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/gvisor-297880/client.crt: no such file or directory
E1002 20:24:47.100465  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/ingress-addon-legacy-851692/client.crt: no such file or directory
E1002 20:25:19.691962  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-063235 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.2: (5m30.048938018s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-063235 -n default-k8s-diff-port-063235
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (330.52s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (21.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-cqptl" [b4247c77-9673-4821-823a-6d850b6c8e36] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-cqptl" [b4247c77-9673-4821-823a-6d850b6c8e36] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 21.025341925s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (21.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-cqptl" [b4247c77-9673-4821-823a-6d850b6c8e36] Running
E1002 20:25:45.425379  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/functional-000083/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0130666s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-807615 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-807615 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/embed-certs/serial/Pause (2.76s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-807615 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-807615 -n embed-certs-807615
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-807615 -n embed-certs-807615: exit status 2 (288.751107ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-807615 -n embed-certs-807615
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-807615 -n embed-certs-807615: exit status 2 (273.986368ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-807615 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-807615 -n embed-certs-807615
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-807615 -n embed-certs-807615
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.76s)

TestStartStop/group/newest-cni/serial/FirstStart (74.68s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-418729 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-418729 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.2: (1m14.679955737s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (74.68s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-v746z" [2e50816b-5c55-447c-947c-cebe0cbcc083] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.021600704s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-v746z" [2e50816b-5c55-447c-947c-cebe0cbcc083] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.019047116s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-016464 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-016464 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/no-preload/serial/Pause (2.63s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-016464 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-016464 -n no-preload-016464
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-016464 -n no-preload-016464: exit status 2 (268.499791ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-016464 -n no-preload-016464
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-016464 -n no-preload-016464: exit status 2 (268.912381ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-016464 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-016464 -n no-preload-016464
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-016464 -n no-preload-016464
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.63s)

TestNetworkPlugins/group/auto/Start (110.56s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-950653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-950653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m50.561069666s)
--- PASS: TestNetworkPlugins/group/auto/Start (110.56s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-fg5j5" [43a0d2c3-81a0-40ce-94c6-1e9e36bfc8b6] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.022615255s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-fg5j5" [43a0d2c3-81a0-40ce-94c6-1e9e36bfc8b6] Running
E1002 20:26:53.282042  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/gvisor-297880/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012441957s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-864077 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/Pause (2.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-864077 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-864077 -n old-k8s-version-864077
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-864077 -n old-k8s-version-864077: exit status 2 (294.320329ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-864077 -n old-k8s-version-864077
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-864077 -n old-k8s-version-864077: exit status 2 (276.334747ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-864077 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-864077 -n old-k8s-version-864077
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-864077 -n old-k8s-version-864077
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.85s)

TestNetworkPlugins/group/kindnet/Start (85.59s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-950653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-950653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m25.59392879s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (85.59s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.32s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-418729 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-418729 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.314959506s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.32s)

TestStartStop/group/newest-cni/serial/Stop (13.15s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-418729 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-418729 --alsologtostderr -v=3: (13.151908237s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.15s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (22.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9rlct" [4b0ec233-ec68-4eb1-861e-8ecbbd491b69] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9rlct" [4b0ec233-ec68-4eb1-861e-8ecbbd491b69] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 22.029702163s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (22.03s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-418729 -n newest-cni-418729
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-418729 -n newest-cni-418729: exit status 7 (82.452609ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-418729 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (62.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-418729 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.2
E1002 20:27:20.968679  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/gvisor-297880/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-418729 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.2: (1m1.929040504s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-418729 -n newest-cni-418729
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (62.25s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9rlct" [4b0ec233-ec68-4eb1-861e-8ecbbd491b69] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.021804164s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-063235 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-063235 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-063235 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-063235 -n default-k8s-diff-port-063235
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-063235 -n default-k8s-diff-port-063235: exit status 2 (283.468555ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-063235 -n default-k8s-diff-port-063235
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-063235 -n default-k8s-diff-port-063235: exit status 2 (287.05777ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-063235 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-063235 -n default-k8s-diff-port-063235
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-063235 -n default-k8s-diff-port-063235
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.03s)

TestNetworkPlugins/group/calico/Start (133.7s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-950653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-950653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (2m13.704734818s)
--- PASS: TestNetworkPlugins/group/calico/Start (133.70s)

TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-950653 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

TestNetworkPlugins/group/auto/NetCatPod (13.35s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-950653 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4vnz2" [7eaa4f0a-64c2-47f2-bf29-2eaf4205bbe4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4vnz2" [7eaa4f0a-64c2-47f2-bf29-2eaf4205bbe4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.019540679s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.35s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-418729 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/Pause (2.76s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-418729 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-418729 -n newest-cni-418729
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-418729 -n newest-cni-418729: exit status 2 (262.803249ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-418729 -n newest-cni-418729
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-418729 -n newest-cni-418729: exit status 2 (274.957123ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-418729 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-418729 -n newest-cni-418729
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-418729 -n newest-cni-418729
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.76s)
E1002 20:31:24.290364  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/default-k8s-diff-port-063235/client.crt: no such file or directory
E1002 20:31:24.295526  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/default-k8s-diff-port-063235/client.crt: no such file or directory
E1002 20:31:24.305993  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/default-k8s-diff-port-063235/client.crt: no such file or directory
E1002 20:31:24.326145  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/default-k8s-diff-port-063235/client.crt: no such file or directory
E1002 20:31:24.366466  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/default-k8s-diff-port-063235/client.crt: no such file or directory
E1002 20:31:24.446845  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/default-k8s-diff-port-063235/client.crt: no such file or directory
E1002 20:31:24.607270  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/default-k8s-diff-port-063235/client.crt: no such file or directory
E1002 20:31:24.928115  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/default-k8s-diff-port-063235/client.crt: no such file or directory
E1002 20:31:25.568710  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/default-k8s-diff-port-063235/client.crt: no such file or directory
E1002 20:31:26.849705  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/default-k8s-diff-port-063235/client.crt: no such file or directory
E1002 20:31:29.410801  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/default-k8s-diff-port-063235/client.crt: no such file or directory
E1002 20:31:34.531065  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/default-k8s-diff-port-063235/client.crt: no such file or directory
E1002 20:31:44.771426  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/default-k8s-diff-port-063235/client.crt: no such file or directory

TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-7cfbm" [6fcd103c-0b56-4d07-aa05-56b1615de82a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.029138709s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

TestNetworkPlugins/group/custom-flannel/Start (85.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-950653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-950653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m25.256799593s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (85.26s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-950653 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

TestNetworkPlugins/group/kindnet/NetCatPod (14.38s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-950653 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4ks5p" [7f5b77dc-6a4f-4263-b962-614b3cb577ee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4ks5p" [7f5b77dc-6a4f-4263-b962-614b3cb577ee] Running
E1002 20:28:40.925460  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/old-k8s-version-864077/client.crt: no such file or directory
E1002 20:28:46.046618  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/old-k8s-version-864077/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 14.015070947s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (14.38s)

TestNetworkPlugins/group/auto/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-950653 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-950653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

TestNetworkPlugins/group/auto/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-950653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)

TestNetworkPlugins/group/kindnet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-950653 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-950653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

TestNetworkPlugins/group/kindnet/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-950653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

TestNetworkPlugins/group/false/Start (119.28s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-950653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
E1002 20:28:56.287433  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/old-k8s-version-864077/client.crt: no such file or directory
E1002 20:29:00.317836  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/skaffold-492655/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-950653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m59.280752258s)
--- PASS: TestNetworkPlugins/group/false/Start (119.28s)

TestNetworkPlugins/group/enable-default-cni/Start (103.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-950653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E1002 20:29:16.768348  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/old-k8s-version-864077/client.crt: no such file or directory
E1002 20:29:28.576141  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/no-preload-016464/client.crt: no such file or directory
E1002 20:29:28.581472  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/no-preload-016464/client.crt: no such file or directory
E1002 20:29:28.592012  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/no-preload-016464/client.crt: no such file or directory
E1002 20:29:28.612320  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/no-preload-016464/client.crt: no such file or directory
E1002 20:29:28.652636  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/no-preload-016464/client.crt: no such file or directory
E1002 20:29:28.733597  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/no-preload-016464/client.crt: no such file or directory
E1002 20:29:28.894547  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/no-preload-016464/client.crt: no such file or directory
E1002 20:29:29.215068  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/no-preload-016464/client.crt: no such file or directory
E1002 20:29:29.856276  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/no-preload-016464/client.crt: no such file or directory
E1002 20:29:31.136577  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/no-preload-016464/client.crt: no such file or directory
E1002 20:29:33.697365  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/no-preload-016464/client.crt: no such file or directory
E1002 20:29:38.817689  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/no-preload-016464/client.crt: no such file or directory
E1002 20:29:47.100776  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/ingress-addon-legacy-851692/client.crt: no such file or directory
E1002 20:29:49.057969  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/no-preload-016464/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-950653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m43.232035216s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (103.23s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-950653 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-950653 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-s4d7q" [a059e904-fb71-4778-a621-6fa29fd26faa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 20:29:57.729177  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/old-k8s-version-864077/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-s4d7q" [a059e904-fb71-4778-a621-6fa29fd26faa] Running
E1002 20:30:02.744698  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.017938641s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.38s)

TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-950653 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-950653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.24s)

TestNetworkPlugins/group/calico/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-tdpq6" [bcf551b8-9425-4eeb-ab91-791c3b922159] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.030352753s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-950653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-950653 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

TestNetworkPlugins/group/calico/NetCatPod (14.65s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-950653 replace --force -f testdata/netcat-deployment.yaml
E1002 20:30:09.538547  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/no-preload-016464/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zll2h" [1e77736a-b4d4-4cc7-8a3d-74e02887faec] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zll2h" [1e77736a-b4d4-4cc7-8a3d-74e02887faec] Running
E1002 20:30:19.692146  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/addons-169812/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.012565646s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.65s)

TestNetworkPlugins/group/flannel/Start (87.53s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-950653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
E1002 20:30:23.362162  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/skaffold-492655/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-950653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m27.527319302s)
--- PASS: TestNetworkPlugins/group/flannel/Start (87.53s)

TestNetworkPlugins/group/calico/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-950653 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.31s)

TestNetworkPlugins/group/calico/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-950653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

TestNetworkPlugins/group/calico/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-950653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

TestNetworkPlugins/group/bridge/Start (90.27s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-950653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
E1002 20:30:45.425841  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/functional-000083/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-950653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m30.268193035s)
--- PASS: TestNetworkPlugins/group/bridge/Start (90.27s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-950653 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-950653 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qqvmc" [e26cb43a-00b5-41bc-966c-f5af99410da9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 20:30:50.499184  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/no-preload-016464/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-qqvmc" [e26cb43a-00b5-41bc-966c-f5af99410da9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.011276499s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.30s)

TestNetworkPlugins/group/false/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-950653 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.27s)

TestNetworkPlugins/group/false/NetCatPod (13.38s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-950653 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ht25x" [6d322c6b-809d-4c33-9d33-e48e247f54e5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ht25x" [6d322c6b-809d-4c33-9d33-e48e247f54e5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.020127999s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.38s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-950653 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-950653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-950653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

TestNetworkPlugins/group/false/DNS (0.25s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-950653 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.25s)

TestNetworkPlugins/group/false/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-950653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.17s)

TestNetworkPlugins/group/false/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-950653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.16s)

TestNetworkPlugins/group/kubenet/Start (110.09s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-950653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-950653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m50.090470153s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (110.09s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-2wlqx" [eb586efb-26b2-42fb-be24-d78fb88d92ae] Running
E1002 20:31:53.282622  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/gvisor-297880/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.023054186s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-950653 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/flannel/NetCatPod (13.46s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-950653 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wxp2p" [b71c5e9e-6404-45e7-b105-54b41b434875] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wxp2p" [b71c5e9e-6404-45e7-b105-54b41b434875] Running
E1002 20:32:05.252528  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/default-k8s-diff-port-063235/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.012966451s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.46s)

TestNetworkPlugins/group/flannel/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-950653 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

TestNetworkPlugins/group/flannel/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-950653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

TestNetworkPlugins/group/flannel/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-950653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-950653 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

TestNetworkPlugins/group/bridge/NetCatPod (12.3s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-950653 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-tcjn5" [140599a6-dc50-49cd-b6f8-ecde697c17cd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-tcjn5" [140599a6-dc50-49cd-b6f8-ecde697c17cd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.020907013s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.30s)

TestNetworkPlugins/group/bridge/DNS (21.77s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-950653 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-950653 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.188784145s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-950653 exec deployment/netcat -- nslookup kubernetes.default
E1002 20:32:46.213753  397995 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-390762/.minikube/profiles/default-k8s-diff-port-063235/client.crt: no such file or directory
net_test.go:175: (dbg) Done: kubectl --context bridge-950653 exec deployment/netcat -- nslookup kubernetes.default: (5.186827826s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (21.77s)

TestNetworkPlugins/group/bridge/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-950653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-950653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-950653 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.21s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.3s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-950653 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rxp62" [0d18c6a3-7761-4f47-9afe-58785ccebed9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rxp62" [0d18c6a3-7761-4f47-9afe-58785ccebed9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.011762204s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.30s)

TestNetworkPlugins/group/kubenet/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-950653 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.18s)

TestNetworkPlugins/group/kubenet/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-950653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.14s)

TestNetworkPlugins/group/kubenet/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-950653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.14s)

Test skip (31/318)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

TestDownloadOnly/v1.28.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

TestDownloadOnly/v1.28.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.2/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:476: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.15s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-673689" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-673689
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

TestNetworkPlugins/group/cilium (3.2s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-950653 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-950653

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-950653

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-950653

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-950653

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-950653

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-950653

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-950653

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-950653

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-950653

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-950653

>>> host: /etc/nsswitch.conf:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-950653

>>> host: crictl pods:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

>>> host: crictl containers:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

>>> k8s: describe netcat deployment:
error: context "cilium-950653" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-950653" does not exist

>>> k8s: netcat logs:
error: context "cilium-950653" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-950653" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-950653" does not exist

>>> k8s: coredns logs:
error: context "cilium-950653" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-950653" does not exist

>>> k8s: api server logs:
error: context "cilium-950653" does not exist

>>> host: /etc/cni:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

>>> host: ip a s:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

>>> host: ip r s:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

>>> host: iptables-save:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

>>> host: iptables table nat:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-950653

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-950653

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-950653" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-950653" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-950653

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-950653

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-950653" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-950653" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-950653" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-950653" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-950653" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

>>> host: kubelet daemon config:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

>>> k8s: kubelet logs:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-950653

>>> host: docker daemon status:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

>>> host: docker daemon config:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

>>> host: docker system info:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

>>> host: cri-docker daemon status:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

>>> host: cri-docker daemon config:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

>>> host: cri-dockerd version:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

>>> host: containerd daemon status:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

>>> host: containerd daemon config:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

>>> host: containerd config dump:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

>>> host: crio daemon status:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

>>> host: crio daemon config:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

>>> host: /etc/crio:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

>>> host: crio config:
* Profile "cilium-950653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-950653"

----------------------- debugLogs end: cilium-950653 [took: 3.046171929s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-950653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-950653
--- SKIP: TestNetworkPlugins/group/cilium (3.20s)