Test Report: KVM_Linux 17866

8c6a2e99755a9a0a7d8f4ed404c065becb2fd234:2024-01-08:32612

Test failures (1/329)

Order  Failed test                          Duration (s)
227    TestMultiNode/serial/StartAfterStop  20.77
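The failing step exercises node start on a worker that was previously stopped. A minimal reproduction sketch under the same setup as this run, assuming a multi-node profile named multinode-472593 with a worker m03 already exists (both names are specific to this job):

    # Stop the worker, then start it again with verbose logging.
    out/minikube-linux-amd64 -p multinode-472593 node stop m03
    out/minikube-linux-amd64 -p multinode-472593 node start m03 --alsologtostderr

The second command is exactly the invocation that exits with status 90 in the log below.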
TestMultiNode/serial/StartAfterStop (20.77s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-472593 node start m03 --alsologtostderr: exit status 90 (18.132368088s)

-- stdout --
	* Starting worker node multinode-472593-m03 in cluster multinode-472593
	* Restarting existing kvm2 VM for "multinode-472593-m03" ...
	
	

-- /stdout --
** stderr ** 
	I0108 21:10:40.043560  164703 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:10:40.043737  164703 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:10:40.043750  164703 out.go:309] Setting ErrFile to fd 2...
	I0108 21:10:40.043756  164703 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:10:40.043925  164703 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-142784/.minikube/bin
	I0108 21:10:40.044197  164703 mustload.go:65] Loading cluster: multinode-472593
	I0108 21:10:40.044571  164703 config.go:182] Loaded profile config "multinode-472593": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 21:10:40.044925  164703 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0108 21:10:40.044963  164703 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:10:40.060819  164703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35579
	I0108 21:10:40.061285  164703 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:10:40.061905  164703 main.go:141] libmachine: Using API Version  1
	I0108 21:10:40.061931  164703 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:10:40.062315  164703 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:10:40.062540  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetState
	W0108 21:10:40.063951  164703 host.go:58] "multinode-472593-m03" host status: Stopped
	I0108 21:10:40.066210  164703 out.go:177] * Starting worker node multinode-472593-m03 in cluster multinode-472593
	I0108 21:10:40.067503  164703 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 21:10:40.067538  164703 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17866-142784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0108 21:10:40.067556  164703 cache.go:56] Caching tarball of preloaded images
	I0108 21:10:40.067650  164703 preload.go:174] Found /home/jenkins/minikube-integration/17866-142784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 21:10:40.067663  164703 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0108 21:10:40.067817  164703 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/config.json ...
	I0108 21:10:40.068037  164703 start.go:365] acquiring machines lock for multinode-472593-m03: {Name:mk82511c12c99b4c49d70e636cfc8467781aa323 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:10:40.068100  164703 start.go:369] acquired machines lock for "multinode-472593-m03" in 27.832µs
	I0108 21:10:40.068132  164703 start.go:96] Skipping create...Using existing machine configuration
	I0108 21:10:40.068144  164703 fix.go:54] fixHost starting: m03
	I0108 21:10:40.068425  164703 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0108 21:10:40.068456  164703 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:10:40.083936  164703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41111
	I0108 21:10:40.084342  164703 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:10:40.084791  164703 main.go:141] libmachine: Using API Version  1
	I0108 21:10:40.084810  164703 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:10:40.085146  164703 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:10:40.085330  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .DriverName
	I0108 21:10:40.085475  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetState
	I0108 21:10:40.086877  164703 fix.go:102] recreateIfNeeded on multinode-472593-m03: state=Stopped err=<nil>
	I0108 21:10:40.086895  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .DriverName
	W0108 21:10:40.087037  164703 fix.go:128] unexpected machine state, will restart: <nil>
	I0108 21:10:40.088922  164703 out.go:177] * Restarting existing kvm2 VM for "multinode-472593-m03" ...
	I0108 21:10:40.090167  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .Start
	I0108 21:10:40.090359  164703 main.go:141] libmachine: (multinode-472593-m03) Ensuring networks are active...
	I0108 21:10:40.090924  164703 main.go:141] libmachine: (multinode-472593-m03) Ensuring network default is active
	I0108 21:10:40.091277  164703 main.go:141] libmachine: (multinode-472593-m03) Ensuring network mk-multinode-472593 is active
	I0108 21:10:40.091618  164703 main.go:141] libmachine: (multinode-472593-m03) Getting domain xml...
	I0108 21:10:40.092250  164703 main.go:141] libmachine: (multinode-472593-m03) Creating domain...
	I0108 21:10:41.355085  164703 main.go:141] libmachine: (multinode-472593-m03) Waiting to get IP...
	I0108 21:10:41.356052  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:41.356556  164703 main.go:141] libmachine: (multinode-472593-m03) Found IP for machine: 192.168.39.70
	I0108 21:10:41.356577  164703 main.go:141] libmachine: (multinode-472593-m03) Reserving static IP address...
	I0108 21:10:41.356594  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has current primary IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:41.357078  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "multinode-472593-m03", mac: "52:54:00:96:bc:2d", ip: "192.168.39.70"} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:09:58 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
	I0108 21:10:41.357121  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | skip adding static IP to network mk-multinode-472593 - found existing host DHCP lease matching {name: "multinode-472593-m03", mac: "52:54:00:96:bc:2d", ip: "192.168.39.70"}
	I0108 21:10:41.357144  164703 main.go:141] libmachine: (multinode-472593-m03) Reserved static IP address: 192.168.39.70
	I0108 21:10:41.357162  164703 main.go:141] libmachine: (multinode-472593-m03) Waiting for SSH to be available...
	I0108 21:10:41.357176  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | Getting to WaitForSSH function...
	I0108 21:10:41.359304  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:41.359581  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:bc:2d", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:09:58 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
	I0108 21:10:41.359624  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:41.359785  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | Using SSH client type: external
	I0108 21:10:41.359826  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593-m03/id_rsa (-rw-------)
	I0108 21:10:41.359865  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 21:10:41.359887  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | About to run SSH command:
	I0108 21:10:41.359904  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | exit 0
	I0108 21:10:53.457077  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | SSH cmd err, output: <nil>: 
	I0108 21:10:53.457542  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetConfigRaw
	I0108 21:10:53.458197  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetIP
	I0108 21:10:53.460679  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:53.461072  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:bc:2d", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:10:51 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
	I0108 21:10:53.461114  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:53.461452  164703 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/config.json ...
	I0108 21:10:53.461705  164703 machine.go:88] provisioning docker machine ...
	I0108 21:10:53.461730  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .DriverName
	I0108 21:10:53.461946  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetMachineName
	I0108 21:10:53.462128  164703 buildroot.go:166] provisioning hostname "multinode-472593-m03"
	I0108 21:10:53.462148  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetMachineName
	I0108 21:10:53.462305  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHHostname
	I0108 21:10:53.464631  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:53.464971  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:bc:2d", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:10:51 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
	I0108 21:10:53.465001  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:53.465135  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHPort
	I0108 21:10:53.465300  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
	I0108 21:10:53.465451  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
	I0108 21:10:53.465577  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHUsername
	I0108 21:10:53.465722  164703 main.go:141] libmachine: Using SSH client type: native
	I0108 21:10:53.466082  164703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0108 21:10:53.466103  164703 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-472593-m03 && echo "multinode-472593-m03" | sudo tee /etc/hostname
	I0108 21:10:53.590342  164703 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-472593-m03
	
	I0108 21:10:53.590377  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHHostname
	I0108 21:10:53.593358  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:53.593784  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:bc:2d", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:10:51 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
	I0108 21:10:53.593826  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:53.593997  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHPort
	I0108 21:10:53.594218  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
	I0108 21:10:53.594470  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
	I0108 21:10:53.594642  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHUsername
	I0108 21:10:53.594854  164703 main.go:141] libmachine: Using SSH client type: native
	I0108 21:10:53.595280  164703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0108 21:10:53.595309  164703 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-472593-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-472593-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-472593-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:10:53.718616  164703 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:10:53.718651  164703 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-142784/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-142784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-142784/.minikube}
	I0108 21:10:53.718677  164703 buildroot.go:174] setting up certificates
	I0108 21:10:53.718689  164703 provision.go:83] configureAuth start
	I0108 21:10:53.718703  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetMachineName
	I0108 21:10:53.718990  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetIP
	I0108 21:10:53.721533  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:53.721941  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:bc:2d", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:10:51 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
	I0108 21:10:53.721996  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:53.722106  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHHostname
	I0108 21:10:53.724535  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:53.724929  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:bc:2d", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:10:51 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
	I0108 21:10:53.724967  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:53.725124  164703 provision.go:138] copyHostCerts
	I0108 21:10:53.725198  164703 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-142784/.minikube/key.pem, removing ...
	I0108 21:10:53.725217  164703 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-142784/.minikube/key.pem
	I0108 21:10:53.725297  164703 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-142784/.minikube/key.pem (1679 bytes)
	I0108 21:10:53.725442  164703 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-142784/.minikube/ca.pem, removing ...
	I0108 21:10:53.725456  164703 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-142784/.minikube/ca.pem
	I0108 21:10:53.725492  164703 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-142784/.minikube/ca.pem (1078 bytes)
	I0108 21:10:53.725592  164703 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-142784/.minikube/cert.pem, removing ...
	I0108 21:10:53.725605  164703 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-142784/.minikube/cert.pem
	I0108 21:10:53.725639  164703 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-142784/.minikube/cert.pem (1123 bytes)
	I0108 21:10:53.725697  164703 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-142784/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca-key.pem org=jenkins.multinode-472593-m03 san=[192.168.39.70 192.168.39.70 localhost 127.0.0.1 minikube multinode-472593-m03]
	I0108 21:10:53.867806  164703 provision.go:172] copyRemoteCerts
	I0108 21:10:53.867869  164703 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:10:53.867895  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHHostname
	I0108 21:10:53.870480  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:53.870783  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:bc:2d", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:10:51 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
	I0108 21:10:53.870812  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:53.870987  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHPort
	I0108 21:10:53.871179  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
	I0108 21:10:53.871318  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHUsername
	I0108 21:10:53.871424  164703 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593-m03/id_rsa Username:docker}
	I0108 21:10:53.954704  164703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:10:53.977224  164703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0108 21:10:53.999695  164703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 21:10:54.022652  164703 provision.go:86] duration metric: configureAuth took 303.944851ms
	I0108 21:10:54.022692  164703 buildroot.go:189] setting minikube options for container-runtime
	I0108 21:10:54.022923  164703 config.go:182] Loaded profile config "multinode-472593": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 21:10:54.022952  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .DriverName
	I0108 21:10:54.023241  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHHostname
	I0108 21:10:54.025848  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:54.026237  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:bc:2d", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:10:51 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
	I0108 21:10:54.026264  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:54.026564  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHPort
	I0108 21:10:54.026758  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
	I0108 21:10:54.026914  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
	I0108 21:10:54.027012  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHUsername
	I0108 21:10:54.027151  164703 main.go:141] libmachine: Using SSH client type: native
	I0108 21:10:54.027528  164703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0108 21:10:54.027544  164703 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 21:10:54.139792  164703 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0108 21:10:54.139816  164703 buildroot.go:70] root file system type: tmpfs
	I0108 21:10:54.139919  164703 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 21:10:54.139941  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHHostname
	I0108 21:10:54.142808  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:54.143243  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:bc:2d", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:10:51 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
	I0108 21:10:54.143278  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:54.143531  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHPort
	I0108 21:10:54.143756  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
	I0108 21:10:54.143947  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
	I0108 21:10:54.144070  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHUsername
	I0108 21:10:54.144261  164703 main.go:141] libmachine: Using SSH client type: native
	I0108 21:10:54.144734  164703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0108 21:10:54.144841  164703 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 21:10:54.266412  164703 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 21:10:54.266449  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHHostname
	I0108 21:10:54.269259  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:54.269641  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:bc:2d", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:10:51 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
	I0108 21:10:54.269668  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:54.269878  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHPort
	I0108 21:10:54.270084  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
	I0108 21:10:54.270261  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
	I0108 21:10:54.270434  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHUsername
	I0108 21:10:54.270572  164703 main.go:141] libmachine: Using SSH client type: native
	I0108 21:10:54.270877  164703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0108 21:10:54.270896  164703 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 21:10:55.103378  164703 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0108 21:10:55.103411  164703 machine.go:91] provisioned docker machine in 1.641689016s
	I0108 21:10:55.103426  164703 start.go:300] post-start starting for "multinode-472593-m03" (driver="kvm2")
	I0108 21:10:55.103438  164703 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:10:55.103491  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .DriverName
	I0108 21:10:55.103799  164703 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:10:55.103828  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHHostname
	I0108 21:10:55.106292  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:55.106699  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:bc:2d", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:10:51 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
	I0108 21:10:55.106729  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:55.106846  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHPort
	I0108 21:10:55.107049  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
	I0108 21:10:55.107245  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHUsername
	I0108 21:10:55.107383  164703 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593-m03/id_rsa Username:docker}
	I0108 21:10:55.191885  164703 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:10:55.196003  164703 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 21:10:55.196029  164703 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-142784/.minikube/addons for local assets ...
	I0108 21:10:55.196105  164703 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-142784/.minikube/files for local assets ...
	I0108 21:10:55.196171  164703 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-142784/.minikube/files/etc/ssl/certs/1499882.pem -> 1499882.pem in /etc/ssl/certs
	I0108 21:10:55.196267  164703 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:10:55.205832  164703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/files/etc/ssl/certs/1499882.pem --> /etc/ssl/certs/1499882.pem (1708 bytes)
	I0108 21:10:55.227720  164703 start.go:303] post-start completed in 124.276108ms
	I0108 21:10:55.227759  164703 fix.go:56] fixHost completed within 15.159615742s
	I0108 21:10:55.227788  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHHostname
	I0108 21:10:55.230536  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:55.230880  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:bc:2d", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:10:51 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
	I0108 21:10:55.230911  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:55.231120  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHPort
	I0108 21:10:55.231326  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
	I0108 21:10:55.231502  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
	I0108 21:10:55.231644  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHUsername
	I0108 21:10:55.231797  164703 main.go:141] libmachine: Using SSH client type: native
	I0108 21:10:55.232102  164703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0108 21:10:55.232114  164703 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0108 21:10:55.341849  164703 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704748255.293015959
	
	I0108 21:10:55.341873  164703 fix.go:206] guest clock: 1704748255.293015959
	I0108 21:10:55.341882  164703 fix.go:219] Guest: 2024-01-08 21:10:55.293015959 +0000 UTC Remote: 2024-01-08 21:10:55.227763405 +0000 UTC m=+15.232080509 (delta=65.252554ms)
	I0108 21:10:55.341909  164703 fix.go:190] guest clock delta is within tolerance: 65.252554ms
	I0108 21:10:55.341917  164703 start.go:83] releasing machines lock for "multinode-472593-m03", held for 15.273804374s
	I0108 21:10:55.341940  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .DriverName
	I0108 21:10:55.342188  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetIP
	I0108 21:10:55.344955  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:55.345269  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:bc:2d", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:10:51 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
	I0108 21:10:55.345299  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:55.345436  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .DriverName
	I0108 21:10:55.345955  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .DriverName
	I0108 21:10:55.346146  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .DriverName
	I0108 21:10:55.346246  164703 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:10:55.346304  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHHostname
	I0108 21:10:55.346357  164703 ssh_runner.go:195] Run: systemctl --version
	I0108 21:10:55.346377  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHHostname
	I0108 21:10:55.349127  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:55.349338  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:55.349613  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:bc:2d", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:10:51 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
	I0108 21:10:55.349640  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:55.349828  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHPort
	I0108 21:10:55.349831  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:bc:2d", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:10:51 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
	I0108 21:10:55.349899  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
	I0108 21:10:55.349973  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHPort
	I0108 21:10:55.350048  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
	I0108 21:10:55.350172  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHUsername
	I0108 21:10:55.350220  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
	I0108 21:10:55.350354  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHUsername
	I0108 21:10:55.350351  164703 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593-m03/id_rsa Username:docker}
	I0108 21:10:55.350584  164703 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593-m03/id_rsa Username:docker}
	I0108 21:10:55.455461  164703 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 21:10:55.461134  164703 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 21:10:55.461213  164703 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:10:55.478067  164703 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 21:10:55.478114  164703 start.go:475] detecting cgroup driver to use...
	I0108 21:10:55.478256  164703 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:10:55.495063  164703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0108 21:10:55.505369  164703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0108 21:10:55.516380  164703 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0108 21:10:55.516454  164703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0108 21:10:55.527060  164703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 21:10:55.537379  164703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0108 21:10:55.548787  164703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 21:10:55.559526  164703 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 21:10:55.570937  164703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0108 21:10:55.581545  164703 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:10:55.591388  164703 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:10:55.601219  164703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:10:55.706142  164703 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 21:10:55.723879  164703 start.go:475] detecting cgroup driver to use...
	I0108 21:10:55.723981  164703 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 21:10:55.740141  164703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:10:55.756343  164703 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 21:10:55.777285  164703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:10:55.790398  164703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:10:55.806197  164703 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:10:55.837017  164703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:10:55.851427  164703 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:10:55.868350  164703 ssh_runner.go:195] Run: which cri-dockerd
	I0108 21:10:55.872420  164703 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0108 21:10:55.882827  164703 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0108 21:10:55.899435  164703 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 21:10:56.007182  164703 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 21:10:56.122036  164703 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I0108 21:10:56.122218  164703 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0108 21:10:56.138395  164703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:10:56.239040  164703 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 21:10:57.652869  164703 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.413759946s)
	I0108 21:10:57.652959  164703 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 21:10:57.757282  164703 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0108 21:10:57.873462  164703 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 21:10:57.981530  164703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:10:58.085232  164703 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0108 21:10:58.100617  164703 ssh_runner.go:195] Run: sudo journalctl --no-pager -u cri-docker.socket
	I0108 21:10:58.113905  164703 out.go:177] 
	W0108 21:10:58.115261  164703 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u cri-docker.socket:
	-- stdout --
	-- Journal begins at Mon 2024-01-08 21:10:51 UTC, ends at Mon 2024-01-08 21:10:58 UTC. --
	Jan 08 21:10:52 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Jan 08 21:10:52 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Jan 08 21:10:54 multinode-472593-m03 systemd[1]: cri-docker.socket: Succeeded.
	Jan 08 21:10:54 multinode-472593-m03 systemd[1]: Closed CRI Docker Socket for the API.
	Jan 08 21:10:54 multinode-472593-m03 systemd[1]: Stopping CRI Docker Socket for the API.
	Jan 08 21:10:54 multinode-472593-m03 systemd[1]: Starting CRI Docker Socket for the API.
	Jan 08 21:10:54 multinode-472593-m03 systemd[1]: Listening on CRI Docker Socket for the API.
	Jan 08 21:10:58 multinode-472593-m03 systemd[1]: cri-docker.socket: Succeeded.
	Jan 08 21:10:58 multinode-472593-m03 systemd[1]: Closed CRI Docker Socket for the API.
	Jan 08 21:10:58 multinode-472593-m03 systemd[1]: Stopping CRI Docker Socket for the API.
	Jan 08 21:10:58 multinode-472593-m03 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Jan 08 21:10:58 multinode-472593-m03 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	
	-- /stdout --
	W0108 21:10:58.115279  164703 out.go:239] * 
	W0108 21:10:58.117429  164703 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0108 21:10:58.118739  164703 out.go:177] 

** /stderr **
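The journalctl excerpt in the stderr above points at the root cause: systemd refuses to bring cri-docker.socket back up while its paired service is active ("cri-docker.socket: Socket service cri-docker.service already active, refusing."), so the final sudo systemctl restart cri-docker.socket exits with status 1 and minikube aborts with RUNTIME_ENABLE. A minimal shell sketch of how one might inspect and work around this on the node; the unit names are taken from the log, but the sequence itself is a hypothetical manual fix, not part of the test run:

    # Show both units; a socket cannot (re)start while its service holds it active.
    systemctl status cri-docker.socket cri-docker.service
    # Stop the service first, then restart the socket, then bring the service back up.
    sudo systemctl stop cri-docker.service
    sudo systemctl restart cri-docker.socket
    sudo systemctl start cri-docker.service

The timestamps suggest cri-docker.service had already been socket-activated by 21:10:58 (the socket was listening from 21:10:54), which is why the bare socket restart is refused.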
I0108 21:10:41.359887  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | About to run SSH command:
I0108 21:10:41.359904  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | exit 0
I0108 21:10:53.457077  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | SSH cmd err, output: <nil>: 
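The twelve-second gap above (21:10:41 to 21:10:53) is the SSH readiness probe: the trivial command `exit 0` is retried over SSH until the guest accepts a connection. A minimal sketch of the same probe, reusing the key path and address from this run:

    # retry a no-op SSH command until the guest is reachable (sketch)
    until ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o ConnectTimeout=10 -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593-m03/id_rsa \
        docker@192.168.39.70 exit 0; do
      sleep 2   # brief pause between attempts while the VM boots
    done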
I0108 21:10:53.457542  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetConfigRaw
I0108 21:10:53.458197  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetIP
I0108 21:10:53.460679  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
I0108 21:10:53.461072  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:bc:2d", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:10:51 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
I0108 21:10:53.461114  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
I0108 21:10:53.461452  164703 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/config.json ...
I0108 21:10:53.461705  164703 machine.go:88] provisioning docker machine ...
I0108 21:10:53.461730  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .DriverName
I0108 21:10:53.461946  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetMachineName
I0108 21:10:53.462128  164703 buildroot.go:166] provisioning hostname "multinode-472593-m03"
I0108 21:10:53.462148  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetMachineName
I0108 21:10:53.462305  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHHostname
I0108 21:10:53.464631  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
I0108 21:10:53.464971  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:bc:2d", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:10:51 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
I0108 21:10:53.465001  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
I0108 21:10:53.465135  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHPort
I0108 21:10:53.465300  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
I0108 21:10:53.465451  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
I0108 21:10:53.465577  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHUsername
I0108 21:10:53.465722  164703 main.go:141] libmachine: Using SSH client type: native
I0108 21:10:53.466082  164703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
I0108 21:10:53.466103  164703 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-472593-m03 && echo "multinode-472593-m03" | sudo tee /etc/hostname
I0108 21:10:53.590342  164703 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-472593-m03
I0108 21:10:53.590377  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHHostname
I0108 21:10:53.593358  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
I0108 21:10:53.593784  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:bc:2d", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:10:51 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
I0108 21:10:53.593826  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
I0108 21:10:53.593997  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHPort
I0108 21:10:53.594218  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
I0108 21:10:53.594470  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
I0108 21:10:53.594642  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHUsername
I0108 21:10:53.594854  164703 main.go:141] libmachine: Using SSH client type: native
I0108 21:10:53.595280  164703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
I0108 21:10:53.595309  164703 main.go:141] libmachine: About to run SSH command:
		if ! grep -xq '.*\smultinode-472593-m03' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-472593-m03/g' /etc/hosts;
			else 
				echo '127.0.1.1 multinode-472593-m03' | sudo tee -a /etc/hosts; 
			fi
		fi
I0108 21:10:53.718616  164703 main.go:141] libmachine: SSH cmd err, output: <nil>: 
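The hosts-file script above is idempotent: it only edits /etc/hosts when the hostname mapping is missing, rewriting an existing 127.0.1.1 line in place or appending a new one. A quick check of the result on the guest, assuming the hostname from this run:

    # confirm the loopback mapping landed (sketch)
    grep '^127.0.1.1' /etc/hosts   # expect: 127.0.1.1 multinode-472593-m03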
I0108 21:10:53.718651  164703 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-142784/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-142784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-142784/.minikube}
I0108 21:10:53.718677  164703 buildroot.go:174] setting up certificates
I0108 21:10:53.718689  164703 provision.go:83] configureAuth start
I0108 21:10:53.718703  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetMachineName
I0108 21:10:53.718990  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetIP
I0108 21:10:53.721533  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
I0108 21:10:53.721941  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:bc:2d", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:10:51 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
I0108 21:10:53.721996  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
I0108 21:10:53.722106  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHHostname
I0108 21:10:53.724535  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
I0108 21:10:53.724929  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:bc:2d", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:10:51 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
I0108 21:10:53.724967  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
I0108 21:10:53.725124  164703 provision.go:138] copyHostCerts
I0108 21:10:53.725198  164703 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-142784/.minikube/key.pem, removing ...
I0108 21:10:53.725217  164703 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-142784/.minikube/key.pem
I0108 21:10:53.725297  164703 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-142784/.minikube/key.pem (1679 bytes)
I0108 21:10:53.725442  164703 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-142784/.minikube/ca.pem, removing ...
I0108 21:10:53.725456  164703 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-142784/.minikube/ca.pem
I0108 21:10:53.725492  164703 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-142784/.minikube/ca.pem (1078 bytes)
I0108 21:10:53.725592  164703 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-142784/.minikube/cert.pem, removing ...
I0108 21:10:53.725605  164703 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-142784/.minikube/cert.pem
I0108 21:10:53.725639  164703 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-142784/.minikube/cert.pem (1123 bytes)
I0108 21:10:53.725697  164703 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-142784/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca-key.pem org=jenkins.multinode-472593-m03 san=[192.168.39.70 192.168.39.70 localhost 127.0.0.1 minikube multinode-472593-m03]
I0108 21:10:53.867806  164703 provision.go:172] copyRemoteCerts
I0108 21:10:53.867869  164703 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0108 21:10:53.867895  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHHostname
I0108 21:10:53.870480  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
I0108 21:10:53.870783  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:bc:2d", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:10:51 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
I0108 21:10:53.870812  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
I0108 21:10:53.870987  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHPort
I0108 21:10:53.871179  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
I0108 21:10:53.871318  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHUsername
I0108 21:10:53.871424  164703 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593-m03/id_rsa Username:docker}
I0108 21:10:53.954704  164703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0108 21:10:53.977224  164703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0108 21:10:53.999695  164703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0108 21:10:54.022652  164703 provision.go:86] duration metric: configureAuth took 303.944851ms
I0108 21:10:54.022692  164703 buildroot.go:189] setting minikube options for container-runtime
I0108 21:10:54.022923  164703 config.go:182] Loaded profile config "multinode-472593": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 21:10:54.022952  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .DriverName
I0108 21:10:54.023241  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHHostname
I0108 21:10:54.025848  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
I0108 21:10:54.026237  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:bc:2d", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:10:51 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
I0108 21:10:54.026264  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
I0108 21:10:54.026564  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHPort
I0108 21:10:54.026758  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
I0108 21:10:54.026914  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
I0108 21:10:54.027012  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHUsername
I0108 21:10:54.027151  164703 main.go:141] libmachine: Using SSH client type: native
I0108 21:10:54.027528  164703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
I0108 21:10:54.027544  164703 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0108 21:10:54.139792  164703 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0108 21:10:54.139816  164703 buildroot.go:70] root file system type: tmpfs
I0108 21:10:54.139919  164703 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0108 21:10:54.139941  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHHostname
I0108 21:10:54.142808  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
I0108 21:10:54.143243  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:bc:2d", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:10:51 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
I0108 21:10:54.143278  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
I0108 21:10:54.143531  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHPort
I0108 21:10:54.143756  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
I0108 21:10:54.143947  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
I0108 21:10:54.144070  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHUsername
I0108 21:10:54.144261  164703 main.go:141] libmachine: Using SSH client type: native
I0108 21:10:54.144734  164703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
I0108 21:10:54.144841  164703 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0108 21:10:54.266412  164703 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
I0108 21:10:54.266449  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHHostname
I0108 21:10:54.269259  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
I0108 21:10:54.269641  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:bc:2d", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:10:51 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
I0108 21:10:54.269668  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
I0108 21:10:54.269878  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHPort
I0108 21:10:54.270084  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
I0108 21:10:54.270261  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
I0108 21:10:54.270434  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHUsername
I0108 21:10:54.270572  164703 main.go:141] libmachine: Using SSH client type: native
I0108 21:10:54.270877  164703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
I0108 21:10:54.270896  164703 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0108 21:10:55.103378  164703 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0108 21:10:55.103411  164703 machine.go:91] provisioned docker machine in 1.641689016s
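The provisioning step above uses a write-new/diff/swap pattern: the rendered unit goes to docker.service.new, is diffed against the live unit, and is only moved into place (followed by daemon-reload, enable, and restart) when the two differ. On this freshly restarted VM the live unit did not exist, so the diff failed and the swap branch ran, creating the symlink shown. The same idempotent-update idiom, sketched for a generic unit (UNIT is a placeholder):

    # only reload and restart when the rendered unit actually changed (sketch)
    UNIT=/lib/systemd/system/docker.service
    if ! sudo diff -u "$UNIT" "$UNIT.new"; then
      sudo mv "$UNIT.new" "$UNIT"     # swap the new unit into place
      sudo systemctl daemon-reload    # pick up the changed unit file
      sudo systemctl restart docker   # apply the new configuration
    fi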
I0108 21:10:55.103426  164703 start.go:300] post-start starting for "multinode-472593-m03" (driver="kvm2")
I0108 21:10:55.103438  164703 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0108 21:10:55.103491  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .DriverName
I0108 21:10:55.103799  164703 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0108 21:10:55.103828  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHHostname
I0108 21:10:55.106292  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
I0108 21:10:55.106699  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:bc:2d", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:10:51 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
I0108 21:10:55.106729  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
I0108 21:10:55.106846  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHPort
I0108 21:10:55.107049  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
I0108 21:10:55.107245  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHUsername
I0108 21:10:55.107383  164703 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593-m03/id_rsa Username:docker}
I0108 21:10:55.191885  164703 ssh_runner.go:195] Run: cat /etc/os-release
I0108 21:10:55.196003  164703 info.go:137] Remote host: Buildroot 2021.02.12
I0108 21:10:55.196029  164703 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-142784/.minikube/addons for local assets ...
I0108 21:10:55.196105  164703 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-142784/.minikube/files for local assets ...
I0108 21:10:55.196171  164703 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-142784/.minikube/files/etc/ssl/certs/1499882.pem -> 1499882.pem in /etc/ssl/certs
I0108 21:10:55.196267  164703 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0108 21:10:55.205832  164703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/files/etc/ssl/certs/1499882.pem --> /etc/ssl/certs/1499882.pem (1708 bytes)
I0108 21:10:55.227720  164703 start.go:303] post-start completed in 124.276108ms
I0108 21:10:55.227759  164703 fix.go:56] fixHost completed within 15.159615742s
I0108 21:10:55.227788  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHHostname
I0108 21:10:55.230536  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
I0108 21:10:55.230880  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:bc:2d", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:10:51 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
I0108 21:10:55.230911  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
I0108 21:10:55.231120  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHPort
I0108 21:10:55.231326  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
I0108 21:10:55.231502  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
I0108 21:10:55.231644  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHUsername
I0108 21:10:55.231797  164703 main.go:141] libmachine: Using SSH client type: native
I0108 21:10:55.232102  164703 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
I0108 21:10:55.232114  164703 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0108 21:10:55.341849  164703 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704748255.293015959
I0108 21:10:55.341873  164703 fix.go:206] guest clock: 1704748255.293015959
I0108 21:10:55.341882  164703 fix.go:219] Guest: 2024-01-08 21:10:55.293015959 +0000 UTC Remote: 2024-01-08 21:10:55.227763405 +0000 UTC m=+15.232080509 (delta=65.252554ms)
I0108 21:10:55.341909  164703 fix.go:190] guest clock delta is within tolerance: 65.252554ms
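The tolerance check compares the guest clock (read with `date +%s.%N` on the VM) against the host-side timestamp taken just before; the reported delta can be reproduced from the two values in the log:

    # guest 1704748255.293015959 minus host 1704748255.227763405
    echo '1704748255.293015959 - 1704748255.227763405' | bc   # 0.065252554 s, i.e. the 65.252554ms delta above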
I0108 21:10:55.341917  164703 start.go:83] releasing machines lock for "multinode-472593-m03", held for 15.273804374s
I0108 21:10:55.341940  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .DriverName
I0108 21:10:55.342188  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetIP
I0108 21:10:55.344955  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
I0108 21:10:55.345269  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:bc:2d", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:10:51 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
I0108 21:10:55.345299  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
I0108 21:10:55.345436  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .DriverName
I0108 21:10:55.345955  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .DriverName
I0108 21:10:55.346146  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .DriverName
I0108 21:10:55.346246  164703 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0108 21:10:55.346304  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHHostname
I0108 21:10:55.346357  164703 ssh_runner.go:195] Run: systemctl --version
I0108 21:10:55.346377  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHHostname
I0108 21:10:55.349127  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
I0108 21:10:55.349338  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
I0108 21:10:55.349613  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:bc:2d", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:10:51 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
I0108 21:10:55.349640  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
I0108 21:10:55.349828  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHPort
I0108 21:10:55.349831  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:bc:2d", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:10:51 +0000 UTC Type:0 Mac:52:54:00:96:bc:2d Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:multinode-472593-m03 Clientid:01:52:54:00:96:bc:2d}
I0108 21:10:55.349899  164703 main.go:141] libmachine: (multinode-472593-m03) DBG | domain multinode-472593-m03 has defined IP address 192.168.39.70 and MAC address 52:54:00:96:bc:2d in network mk-multinode-472593
I0108 21:10:55.349973  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHPort
I0108 21:10:55.350048  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
I0108 21:10:55.350172  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHUsername
I0108 21:10:55.350220  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHKeyPath
I0108 21:10:55.350354  164703 main.go:141] libmachine: (multinode-472593-m03) Calling .GetSSHUsername
I0108 21:10:55.350351  164703 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593-m03/id_rsa Username:docker}
I0108 21:10:55.350584  164703 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593-m03/id_rsa Username:docker}
I0108 21:10:55.455461  164703 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0108 21:10:55.461134  164703 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0108 21:10:55.461213  164703 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0108 21:10:55.478067  164703 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
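The find/mv step above parks competing CNI configs by renaming anything matching *bridge* or *podman* with a .mk_disabled suffix; here it caught 87-podman-bridge.conflist. Inspecting or undoing that on the guest (sketch):

    ls /etc/cni/net.d/*.mk_disabled   # list the configs that were disabled
    # to re-enable one, strip the suffix again:
    # sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled /etc/cni/net.d/87-podman-bridge.conflist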
I0108 21:10:55.478114  164703 start.go:475] detecting cgroup driver to use...
I0108 21:10:55.478256  164703 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0108 21:10:55.495063  164703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0108 21:10:55.505369  164703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0108 21:10:55.516380  164703 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0108 21:10:55.516454  164703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0108 21:10:55.527060  164703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0108 21:10:55.537379  164703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0108 21:10:55.548787  164703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0108 21:10:55.559526  164703 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0108 21:10:55.570937  164703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0108 21:10:55.581545  164703 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0108 21:10:55.591388  164703 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0108 21:10:55.601219  164703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0108 21:10:55.706142  164703 ssh_runner.go:195] Run: sudo systemctl restart containerd
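The sed edits above switch containerd to the cgroupfs driver (SystemdCgroup = false), normalize the runtime to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d before restarting the daemon. A quick verification of the rewritten config, assuming the stock buildroot template:

    # the edited /etc/containerd/config.toml should now contain these settings (sketch)
    grep -n 'SystemdCgroup'          /etc/containerd/config.toml   # expect: SystemdCgroup = false
    grep -n 'io.containerd.runc.v2'  /etc/containerd/config.toml
    grep -n 'conf_dir'               /etc/containerd/config.toml   # expect: conf_dir = "/etc/cni/net.d"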
I0108 21:10:55.723879  164703 start.go:475] detecting cgroup driver to use...
I0108 21:10:55.723981  164703 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0108 21:10:55.740141  164703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0108 21:10:55.756343  164703 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0108 21:10:55.777285  164703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0108 21:10:55.790398  164703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0108 21:10:55.806197  164703 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0108 21:10:55.837017  164703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0108 21:10:55.851427  164703 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0108 21:10:55.868350  164703 ssh_runner.go:195] Run: which cri-dockerd
I0108 21:10:55.872420  164703 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0108 21:10:55.882827  164703 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0108 21:10:55.899435  164703 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0108 21:10:56.007182  164703 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0108 21:10:56.122036  164703 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
I0108 21:10:56.122218  164703 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0108 21:10:56.138395  164703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0108 21:10:56.239040  164703 ssh_runner.go:195] Run: sudo systemctl restart docker
I0108 21:10:57.652869  164703 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.413759946s)
I0108 21:10:57.652959  164703 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0108 21:10:57.757282  164703 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0108 21:10:57.873462  164703 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0108 21:10:57.981530  164703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0108 21:10:58.085232  164703 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0108 21:10:58.100617  164703 ssh_runner.go:195] Run: sudo journalctl --no-pager -u cri-docker.socket
I0108 21:10:58.113905  164703 out.go:177] 
W0108 21:10:58.115261  164703 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
stdout:

stderr:
Job failed. See "journalctl -xe" for details.
sudo journalctl --no-pager -u cri-docker.socket:
-- stdout --
-- Journal begins at Mon 2024-01-08 21:10:51 UTC, ends at Mon 2024-01-08 21:10:58 UTC. --
Jan 08 21:10:52 minikube systemd[1]: Starting CRI Docker Socket for the API.
Jan 08 21:10:52 minikube systemd[1]: Listening on CRI Docker Socket for the API.
Jan 08 21:10:54 multinode-472593-m03 systemd[1]: cri-docker.socket: Succeeded.
Jan 08 21:10:54 multinode-472593-m03 systemd[1]: Closed CRI Docker Socket for the API.
Jan 08 21:10:54 multinode-472593-m03 systemd[1]: Stopping CRI Docker Socket for the API.
Jan 08 21:10:54 multinode-472593-m03 systemd[1]: Starting CRI Docker Socket for the API.
Jan 08 21:10:54 multinode-472593-m03 systemd[1]: Listening on CRI Docker Socket for the API.
Jan 08 21:10:58 multinode-472593-m03 systemd[1]: cri-docker.socket: Succeeded.
Jan 08 21:10:58 multinode-472593-m03 systemd[1]: Closed CRI Docker Socket for the API.
Jan 08 21:10:58 multinode-472593-m03 systemd[1]: Stopping CRI Docker Socket for the API.
Jan 08 21:10:58 multinode-472593-m03 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
Jan 08 21:10:58 multinode-472593-m03 systemd[1]: Failed to listen on CRI Docker Socket for the API.
-- /stdout --
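The journal pinpoints the exit status 90: systemd refuses to restart a .socket unit while its paired service is active ("Socket service cri-docker.service already active, refusing"), and cri-docker.service was already running when the provisioner issued `systemctl restart cri-docker.socket`. A plausible manual recovery on the node, assuming that ordering race is the only problem (a sketch, not the test's own remediation):

    # stop the paired service so the socket unit can bind again (sketch)
    sudo systemctl stop cri-docker.service
    sudo systemctl restart cri-docker.socket
    sudo systemctl start cri-docker.service
    systemctl is-active cri-docker.socket cri-docker.service   # expect: active, twice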
W0108 21:10:58.115279  164703 out.go:239] * 
W0108 21:10:58.117429  164703 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0108 21:10:58.118739  164703 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-linux-amd64 -p multinode-472593 node start m03 --alsologtostderr": exit status 90
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 status
multinode_test.go:289: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-472593 status: exit status 2 (585.664417ms)
-- stdout --
	multinode-472593
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-472593-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-472593-m03
	type: Worker
	host: Running
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:291: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-472593 status" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-472593 -n multinode-472593
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-472593 logs -n 25: (1.091069236s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-472593 cp multinode-472593:/home/docker/cp-test.txt                           | multinode-472593 | jenkins | v1.32.0 | 08 Jan 24 21:10 UTC | 08 Jan 24 21:10 UTC |
	|         | multinode-472593-m03:/home/docker/cp-test_multinode-472593_multinode-472593-m03.txt     |                  |         |         |                     |                     |
	| ssh     | multinode-472593 ssh -n                                                                 | multinode-472593 | jenkins | v1.32.0 | 08 Jan 24 21:10 UTC | 08 Jan 24 21:10 UTC |
	|         | multinode-472593 sudo cat                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-472593 ssh -n multinode-472593-m03 sudo cat                                   | multinode-472593 | jenkins | v1.32.0 | 08 Jan 24 21:10 UTC | 08 Jan 24 21:10 UTC |
	|         | /home/docker/cp-test_multinode-472593_multinode-472593-m03.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-472593 cp testdata/cp-test.txt                                                | multinode-472593 | jenkins | v1.32.0 | 08 Jan 24 21:10 UTC | 08 Jan 24 21:10 UTC |
	|         | multinode-472593-m02:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-472593 ssh -n                                                                 | multinode-472593 | jenkins | v1.32.0 | 08 Jan 24 21:10 UTC | 08 Jan 24 21:10 UTC |
	|         | multinode-472593-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-472593 cp multinode-472593-m02:/home/docker/cp-test.txt                       | multinode-472593 | jenkins | v1.32.0 | 08 Jan 24 21:10 UTC | 08 Jan 24 21:10 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2621877827/001/cp-test_multinode-472593-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-472593 ssh -n                                                                 | multinode-472593 | jenkins | v1.32.0 | 08 Jan 24 21:10 UTC | 08 Jan 24 21:10 UTC |
	|         | multinode-472593-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-472593 cp multinode-472593-m02:/home/docker/cp-test.txt                       | multinode-472593 | jenkins | v1.32.0 | 08 Jan 24 21:10 UTC | 08 Jan 24 21:10 UTC |
	|         | multinode-472593:/home/docker/cp-test_multinode-472593-m02_multinode-472593.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-472593 ssh -n                                                                 | multinode-472593 | jenkins | v1.32.0 | 08 Jan 24 21:10 UTC | 08 Jan 24 21:10 UTC |
	|         | multinode-472593-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-472593 ssh -n multinode-472593 sudo cat                                       | multinode-472593 | jenkins | v1.32.0 | 08 Jan 24 21:10 UTC | 08 Jan 24 21:10 UTC |
	|         | /home/docker/cp-test_multinode-472593-m02_multinode-472593.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-472593 cp multinode-472593-m02:/home/docker/cp-test.txt                       | multinode-472593 | jenkins | v1.32.0 | 08 Jan 24 21:10 UTC | 08 Jan 24 21:10 UTC |
	|         | multinode-472593-m03:/home/docker/cp-test_multinode-472593-m02_multinode-472593-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-472593 ssh -n                                                                 | multinode-472593 | jenkins | v1.32.0 | 08 Jan 24 21:10 UTC | 08 Jan 24 21:10 UTC |
	|         | multinode-472593-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-472593 ssh -n multinode-472593-m03 sudo cat                                   | multinode-472593 | jenkins | v1.32.0 | 08 Jan 24 21:10 UTC | 08 Jan 24 21:10 UTC |
	|         | /home/docker/cp-test_multinode-472593-m02_multinode-472593-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-472593 cp testdata/cp-test.txt                                                | multinode-472593 | jenkins | v1.32.0 | 08 Jan 24 21:10 UTC | 08 Jan 24 21:10 UTC |
	|         | multinode-472593-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-472593 ssh -n                                                                 | multinode-472593 | jenkins | v1.32.0 | 08 Jan 24 21:10 UTC | 08 Jan 24 21:10 UTC |
	|         | multinode-472593-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-472593 cp multinode-472593-m03:/home/docker/cp-test.txt                       | multinode-472593 | jenkins | v1.32.0 | 08 Jan 24 21:10 UTC | 08 Jan 24 21:10 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2621877827/001/cp-test_multinode-472593-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-472593 ssh -n                                                                 | multinode-472593 | jenkins | v1.32.0 | 08 Jan 24 21:10 UTC | 08 Jan 24 21:10 UTC |
	|         | multinode-472593-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-472593 cp multinode-472593-m03:/home/docker/cp-test.txt                       | multinode-472593 | jenkins | v1.32.0 | 08 Jan 24 21:10 UTC | 08 Jan 24 21:10 UTC |
	|         | multinode-472593:/home/docker/cp-test_multinode-472593-m03_multinode-472593.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-472593 ssh -n                                                                 | multinode-472593 | jenkins | v1.32.0 | 08 Jan 24 21:10 UTC | 08 Jan 24 21:10 UTC |
	|         | multinode-472593-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-472593 ssh -n multinode-472593 sudo cat                                       | multinode-472593 | jenkins | v1.32.0 | 08 Jan 24 21:10 UTC | 08 Jan 24 21:10 UTC |
	|         | /home/docker/cp-test_multinode-472593-m03_multinode-472593.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-472593 cp multinode-472593-m03:/home/docker/cp-test.txt                       | multinode-472593 | jenkins | v1.32.0 | 08 Jan 24 21:10 UTC | 08 Jan 24 21:10 UTC |
	|         | multinode-472593-m02:/home/docker/cp-test_multinode-472593-m03_multinode-472593-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-472593 ssh -n                                                                 | multinode-472593 | jenkins | v1.32.0 | 08 Jan 24 21:10 UTC | 08 Jan 24 21:10 UTC |
	|         | multinode-472593-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-472593 ssh -n multinode-472593-m02 sudo cat                                   | multinode-472593 | jenkins | v1.32.0 | 08 Jan 24 21:10 UTC | 08 Jan 24 21:10 UTC |
	|         | /home/docker/cp-test_multinode-472593-m03_multinode-472593-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-472593 node stop m03                                                          | multinode-472593 | jenkins | v1.32.0 | 08 Jan 24 21:10 UTC | 08 Jan 24 21:10 UTC |
	| node    | multinode-472593 node start                                                             | multinode-472593 | jenkins | v1.32.0 | 08 Jan 24 21:10 UTC |                     |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
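Reading the audit table: every cp into a node is immediately paired with an "ssh -n <node> sudo cat" of the copied file, which is how the test verifies each transfer. Joined back together for the m03 node, the round trip recorded in the table is:

  out/minikube-linux-amd64 -p multinode-472593 cp testdata/cp-test.txt multinode-472593-m03:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p multinode-472593 ssh -n multinode-472593-m03 sudo cat /home/docker/cp-test.txt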
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 21:07:31
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 21:07:31.132277  162103 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:07:31.132568  162103 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:07:31.132579  162103 out.go:309] Setting ErrFile to fd 2...
	I0108 21:07:31.132584  162103 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:07:31.132788  162103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-142784/.minikube/bin
	I0108 21:07:31.133420  162103 out.go:303] Setting JSON to false
	I0108 21:07:31.134347  162103 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6582,"bootTime":1704741469,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 21:07:31.134418  162103 start.go:138] virtualization: kvm guest
	I0108 21:07:31.136912  162103 out.go:177] * [multinode-472593] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 21:07:31.138475  162103 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 21:07:31.138427  162103 notify.go:220] Checking for updates...
	I0108 21:07:31.139784  162103 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 21:07:31.141816  162103 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-142784/kubeconfig
	I0108 21:07:31.143405  162103 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-142784/.minikube
	I0108 21:07:31.144994  162103 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 21:07:31.146395  162103 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 21:07:31.147877  162103 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 21:07:31.183864  162103 out.go:177] * Using the kvm2 driver based on user configuration
	I0108 21:07:31.185295  162103 start.go:298] selected driver: kvm2
	I0108 21:07:31.185310  162103 start.go:902] validating driver "kvm2" against <nil>
	I0108 21:07:31.185325  162103 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 21:07:31.186342  162103 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:07:31.186433  162103 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17866-142784/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 21:07:31.201811  162103 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 21:07:31.201866  162103 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0108 21:07:31.202093  162103 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 21:07:31.202173  162103 cni.go:84] Creating CNI manager for ""
	I0108 21:07:31.202189  162103 cni.go:136] 0 nodes found, recommending kindnet
	I0108 21:07:31.202200  162103 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0108 21:07:31.202219  162103 start_flags.go:321] config:
	{Name:multinode-472593 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-472593 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:07:31.202415  162103 iso.go:125] acquiring lock: {Name:mke23b0adb82dfaa94b41dcd107f45f9f7011388 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 21:07:31.204616  162103 out.go:177] * Starting control plane node multinode-472593 in cluster multinode-472593
	I0108 21:07:31.205996  162103 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 21:07:31.206051  162103 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17866-142784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0108 21:07:31.206066  162103 cache.go:56] Caching tarball of preloaded images
	I0108 21:07:31.206157  162103 preload.go:174] Found /home/jenkins/minikube-integration/17866-142784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 21:07:31.206170  162103 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0108 21:07:31.206503  162103 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/config.json ...
	I0108 21:07:31.206530  162103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/config.json: {Name:mk4413bd2bdc37bc411ccf28be5883c57c0515bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:07:31.206681  162103 start.go:365] acquiring machines lock for multinode-472593: {Name:mk82511c12c99b4c49d70e636cfc8467781aa323 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:07:31.206716  162103 start.go:369] acquired machines lock for "multinode-472593" in 20.624µs
	I0108 21:07:31.206736  162103 start.go:93] Provisioning new machine with config: &{Name:multinode-472593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-472593 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 21:07:31.206810  162103 start.go:125] createHost starting for "" (driver="kvm2")
	I0108 21:07:31.209429  162103 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0108 21:07:31.209882  162103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0108 21:07:31.209928  162103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:07:31.225069  162103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41187
	I0108 21:07:31.225495  162103 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:07:31.226043  162103 main.go:141] libmachine: Using API Version  1
	I0108 21:07:31.226068  162103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:07:31.226408  162103 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:07:31.226597  162103 main.go:141] libmachine: (multinode-472593) Calling .GetMachineName
	I0108 21:07:31.226763  162103 main.go:141] libmachine: (multinode-472593) Calling .DriverName
	I0108 21:07:31.226931  162103 start.go:159] libmachine.API.Create for "multinode-472593" (driver="kvm2")
	I0108 21:07:31.226965  162103 client.go:168] LocalClient.Create starting
	I0108 21:07:31.227001  162103 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca.pem
	I0108 21:07:31.227039  162103 main.go:141] libmachine: Decoding PEM data...
	I0108 21:07:31.227054  162103 main.go:141] libmachine: Parsing certificate...
	I0108 21:07:31.227117  162103 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-142784/.minikube/certs/cert.pem
	I0108 21:07:31.227137  162103 main.go:141] libmachine: Decoding PEM data...
	I0108 21:07:31.227149  162103 main.go:141] libmachine: Parsing certificate...
	I0108 21:07:31.227166  162103 main.go:141] libmachine: Running pre-create checks...
	I0108 21:07:31.227176  162103 main.go:141] libmachine: (multinode-472593) Calling .PreCreateCheck
	I0108 21:07:31.227474  162103 main.go:141] libmachine: (multinode-472593) Calling .GetConfigRaw
	I0108 21:07:31.227898  162103 main.go:141] libmachine: Creating machine...
	I0108 21:07:31.227913  162103 main.go:141] libmachine: (multinode-472593) Calling .Create
	I0108 21:07:31.228054  162103 main.go:141] libmachine: (multinode-472593) Creating KVM machine...
	I0108 21:07:31.229272  162103 main.go:141] libmachine: (multinode-472593) DBG | found existing default KVM network
	I0108 21:07:31.229946  162103 main.go:141] libmachine: (multinode-472593) DBG | I0108 21:07:31.229825  162126 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015350}
	I0108 21:07:31.235489  162103 main.go:141] libmachine: (multinode-472593) DBG | trying to create private KVM network mk-multinode-472593 192.168.39.0/24...
	I0108 21:07:31.310595  162103 main.go:141] libmachine: (multinode-472593) DBG | private KVM network mk-multinode-472593 192.168.39.0/24 created
	I0108 21:07:31.310628  162103 main.go:141] libmachine: (multinode-472593) Setting up store path in /home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593 ...
	I0108 21:07:31.310643  162103 main.go:141] libmachine: (multinode-472593) DBG | I0108 21:07:31.310547  162126 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17866-142784/.minikube
	I0108 21:07:31.310661  162103 main.go:141] libmachine: (multinode-472593) Building disk image from file:///home/jenkins/minikube-integration/17866-142784/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0108 21:07:31.310757  162103 main.go:141] libmachine: (multinode-472593) Downloading /home/jenkins/minikube-integration/17866-142784/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17866-142784/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0108 21:07:31.529465  162103 main.go:141] libmachine: (multinode-472593) DBG | I0108 21:07:31.529289  162126 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593/id_rsa...
	I0108 21:07:31.659795  162103 main.go:141] libmachine: (multinode-472593) DBG | I0108 21:07:31.659640  162126 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593/multinode-472593.rawdisk...
	I0108 21:07:31.659834  162103 main.go:141] libmachine: (multinode-472593) DBG | Writing magic tar header
	I0108 21:07:31.659867  162103 main.go:141] libmachine: (multinode-472593) DBG | Writing SSH key tar header
	I0108 21:07:31.659879  162103 main.go:141] libmachine: (multinode-472593) Setting executable bit set on /home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593 (perms=drwx------)
	I0108 21:07:31.659892  162103 main.go:141] libmachine: (multinode-472593) DBG | I0108 21:07:31.659758  162126 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593 ...
	I0108 21:07:31.659927  162103 main.go:141] libmachine: (multinode-472593) Setting executable bit set on /home/jenkins/minikube-integration/17866-142784/.minikube/machines (perms=drwxr-xr-x)
	I0108 21:07:31.659951  162103 main.go:141] libmachine: (multinode-472593) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593
	I0108 21:07:31.659959  162103 main.go:141] libmachine: (multinode-472593) Setting executable bit set on /home/jenkins/minikube-integration/17866-142784/.minikube (perms=drwxr-xr-x)
	I0108 21:07:31.659977  162103 main.go:141] libmachine: (multinode-472593) Setting executable bit set on /home/jenkins/minikube-integration/17866-142784 (perms=drwxrwxr-x)
	I0108 21:07:31.659992  162103 main.go:141] libmachine: (multinode-472593) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-142784/.minikube/machines
	I0108 21:07:31.660004  162103 main.go:141] libmachine: (multinode-472593) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0108 21:07:31.660024  162103 main.go:141] libmachine: (multinode-472593) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0108 21:07:31.660033  162103 main.go:141] libmachine: (multinode-472593) Creating domain...
	I0108 21:07:31.660045  162103 main.go:141] libmachine: (multinode-472593) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-142784/.minikube
	I0108 21:07:31.660057  162103 main.go:141] libmachine: (multinode-472593) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-142784
	I0108 21:07:31.660085  162103 main.go:141] libmachine: (multinode-472593) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0108 21:07:31.660104  162103 main.go:141] libmachine: (multinode-472593) DBG | Checking permissions on dir: /home/jenkins
	I0108 21:07:31.660112  162103 main.go:141] libmachine: (multinode-472593) DBG | Checking permissions on dir: /home
	I0108 21:07:31.660121  162103 main.go:141] libmachine: (multinode-472593) DBG | Skipping /home - not owner
	I0108 21:07:31.661224  162103 main.go:141] libmachine: (multinode-472593) define libvirt domain using xml: 
	I0108 21:07:31.661255  162103 main.go:141] libmachine: (multinode-472593) <domain type='kvm'>
	I0108 21:07:31.661267  162103 main.go:141] libmachine: (multinode-472593)   <name>multinode-472593</name>
	I0108 21:07:31.661281  162103 main.go:141] libmachine: (multinode-472593)   <memory unit='MiB'>2200</memory>
	I0108 21:07:31.661297  162103 main.go:141] libmachine: (multinode-472593)   <vcpu>2</vcpu>
	I0108 21:07:31.661306  162103 main.go:141] libmachine: (multinode-472593)   <features>
	I0108 21:07:31.661319  162103 main.go:141] libmachine: (multinode-472593)     <acpi/>
	I0108 21:07:31.661330  162103 main.go:141] libmachine: (multinode-472593)     <apic/>
	I0108 21:07:31.661343  162103 main.go:141] libmachine: (multinode-472593)     <pae/>
	I0108 21:07:31.661352  162103 main.go:141] libmachine: (multinode-472593)     
	I0108 21:07:31.661366  162103 main.go:141] libmachine: (multinode-472593)   </features>
	I0108 21:07:31.661379  162103 main.go:141] libmachine: (multinode-472593)   <cpu mode='host-passthrough'>
	I0108 21:07:31.661426  162103 main.go:141] libmachine: (multinode-472593)   
	I0108 21:07:31.661453  162103 main.go:141] libmachine: (multinode-472593)   </cpu>
	I0108 21:07:31.661466  162103 main.go:141] libmachine: (multinode-472593)   <os>
	I0108 21:07:31.661490  162103 main.go:141] libmachine: (multinode-472593)     <type>hvm</type>
	I0108 21:07:31.661501  162103 main.go:141] libmachine: (multinode-472593)     <boot dev='cdrom'/>
	I0108 21:07:31.661508  162103 main.go:141] libmachine: (multinode-472593)     <boot dev='hd'/>
	I0108 21:07:31.661517  162103 main.go:141] libmachine: (multinode-472593)     <bootmenu enable='no'/>
	I0108 21:07:31.661529  162103 main.go:141] libmachine: (multinode-472593)   </os>
	I0108 21:07:31.661552  162103 main.go:141] libmachine: (multinode-472593)   <devices>
	I0108 21:07:31.661572  162103 main.go:141] libmachine: (multinode-472593)     <disk type='file' device='cdrom'>
	I0108 21:07:31.661591  162103 main.go:141] libmachine: (multinode-472593)       <source file='/home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593/boot2docker.iso'/>
	I0108 21:07:31.661604  162103 main.go:141] libmachine: (multinode-472593)       <target dev='hdc' bus='scsi'/>
	I0108 21:07:31.661632  162103 main.go:141] libmachine: (multinode-472593)       <readonly/>
	I0108 21:07:31.661648  162103 main.go:141] libmachine: (multinode-472593)     </disk>
	I0108 21:07:31.661666  162103 main.go:141] libmachine: (multinode-472593)     <disk type='file' device='disk'>
	I0108 21:07:31.661684  162103 main.go:141] libmachine: (multinode-472593)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0108 21:07:31.661707  162103 main.go:141] libmachine: (multinode-472593)       <source file='/home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593/multinode-472593.rawdisk'/>
	I0108 21:07:31.661724  162103 main.go:141] libmachine: (multinode-472593)       <target dev='hda' bus='virtio'/>
	I0108 21:07:31.661736  162103 main.go:141] libmachine: (multinode-472593)     </disk>
	I0108 21:07:31.661750  162103 main.go:141] libmachine: (multinode-472593)     <interface type='network'>
	I0108 21:07:31.661768  162103 main.go:141] libmachine: (multinode-472593)       <source network='mk-multinode-472593'/>
	I0108 21:07:31.661787  162103 main.go:141] libmachine: (multinode-472593)       <model type='virtio'/>
	I0108 21:07:31.661800  162103 main.go:141] libmachine: (multinode-472593)     </interface>
	I0108 21:07:31.661813  162103 main.go:141] libmachine: (multinode-472593)     <interface type='network'>
	I0108 21:07:31.661824  162103 main.go:141] libmachine: (multinode-472593)       <source network='default'/>
	I0108 21:07:31.661832  162103 main.go:141] libmachine: (multinode-472593)       <model type='virtio'/>
	I0108 21:07:31.661841  162103 main.go:141] libmachine: (multinode-472593)     </interface>
	I0108 21:07:31.661849  162103 main.go:141] libmachine: (multinode-472593)     <serial type='pty'>
	I0108 21:07:31.661860  162103 main.go:141] libmachine: (multinode-472593)       <target port='0'/>
	I0108 21:07:31.661869  162103 main.go:141] libmachine: (multinode-472593)     </serial>
	I0108 21:07:31.661885  162103 main.go:141] libmachine: (multinode-472593)     <console type='pty'>
	I0108 21:07:31.661903  162103 main.go:141] libmachine: (multinode-472593)       <target type='serial' port='0'/>
	I0108 21:07:31.661917  162103 main.go:141] libmachine: (multinode-472593)     </console>
	I0108 21:07:31.661928  162103 main.go:141] libmachine: (multinode-472593)     <rng model='virtio'>
	I0108 21:07:31.661944  162103 main.go:141] libmachine: (multinode-472593)       <backend model='random'>/dev/random</backend>
	I0108 21:07:31.661955  162103 main.go:141] libmachine: (multinode-472593)     </rng>
	I0108 21:07:31.661968  162103 main.go:141] libmachine: (multinode-472593)     
	I0108 21:07:31.661987  162103 main.go:141] libmachine: (multinode-472593)     
	I0108 21:07:31.662001  162103 main.go:141] libmachine: (multinode-472593)   </devices>
	I0108 21:07:31.662012  162103 main.go:141] libmachine: (multinode-472593) </domain>
	I0108 21:07:31.662040  162103 main.go:141] libmachine: (multinode-472593) 
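For context on the define-and-boot flow above: the driver renders that domain XML, asks libvirt to define a persistent domain from it, and then creates (boots) the domain, after which it polls DHCP leases for an IP ("Waiting to get IP..."). A minimal sketch of that flow, assuming the libvirt.org/go/libvirt Go bindings (the real implementation lives in docker-machine-driver-kvm2 and is not reproduced here):

package sketch

import (
	"libvirt.org/go/libvirt"
)

// defineAndStart mirrors the "define libvirt domain using xml" and
// "Creating domain..." steps in the log: define a persistent domain
// from XML, then boot it. Sketch only, not the driver's actual code.
func defineAndStart(domainXML string) error {
	// KVMQemuURI:qemu:///system in the cluster config above.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()

	// Create() starts the defined domain; the driver then waits for an IP.
	return dom.Create()
}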
	I0108 21:07:31.666315  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:39:27:5a in network default
	I0108 21:07:31.666930  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:31.666960  162103 main.go:141] libmachine: (multinode-472593) Ensuring networks are active...
	I0108 21:07:31.667810  162103 main.go:141] libmachine: (multinode-472593) Ensuring network default is active
	I0108 21:07:31.668228  162103 main.go:141] libmachine: (multinode-472593) Ensuring network mk-multinode-472593 is active
	I0108 21:07:31.668737  162103 main.go:141] libmachine: (multinode-472593) Getting domain xml...
	I0108 21:07:31.669444  162103 main.go:141] libmachine: (multinode-472593) Creating domain...
	I0108 21:07:32.911473  162103 main.go:141] libmachine: (multinode-472593) Waiting to get IP...
	I0108 21:07:32.912248  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:32.912590  162103 main.go:141] libmachine: (multinode-472593) DBG | unable to find current IP address of domain multinode-472593 in network mk-multinode-472593
	I0108 21:07:32.912615  162103 main.go:141] libmachine: (multinode-472593) DBG | I0108 21:07:32.912554  162126 retry.go:31] will retry after 203.294619ms: waiting for machine to come up
	I0108 21:07:33.116991  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:33.117385  162103 main.go:141] libmachine: (multinode-472593) DBG | unable to find current IP address of domain multinode-472593 in network mk-multinode-472593
	I0108 21:07:33.117445  162103 main.go:141] libmachine: (multinode-472593) DBG | I0108 21:07:33.117329  162126 retry.go:31] will retry after 279.875841ms: waiting for machine to come up
	I0108 21:07:33.398973  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:33.399364  162103 main.go:141] libmachine: (multinode-472593) DBG | unable to find current IP address of domain multinode-472593 in network mk-multinode-472593
	I0108 21:07:33.399422  162103 main.go:141] libmachine: (multinode-472593) DBG | I0108 21:07:33.399293  162126 retry.go:31] will retry after 461.145592ms: waiting for machine to come up
	I0108 21:07:33.861943  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:33.862398  162103 main.go:141] libmachine: (multinode-472593) DBG | unable to find current IP address of domain multinode-472593 in network mk-multinode-472593
	I0108 21:07:33.862432  162103 main.go:141] libmachine: (multinode-472593) DBG | I0108 21:07:33.862355  162126 retry.go:31] will retry after 479.832189ms: waiting for machine to come up
	I0108 21:07:34.345461  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:34.345931  162103 main.go:141] libmachine: (multinode-472593) DBG | unable to find current IP address of domain multinode-472593 in network mk-multinode-472593
	I0108 21:07:34.345965  162103 main.go:141] libmachine: (multinode-472593) DBG | I0108 21:07:34.345870  162126 retry.go:31] will retry after 534.176392ms: waiting for machine to come up
	I0108 21:07:34.881618  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:34.881990  162103 main.go:141] libmachine: (multinode-472593) DBG | unable to find current IP address of domain multinode-472593 in network mk-multinode-472593
	I0108 21:07:34.882020  162103 main.go:141] libmachine: (multinode-472593) DBG | I0108 21:07:34.881943  162126 retry.go:31] will retry after 906.768658ms: waiting for machine to come up
	I0108 21:07:35.790074  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:35.790538  162103 main.go:141] libmachine: (multinode-472593) DBG | unable to find current IP address of domain multinode-472593 in network mk-multinode-472593
	I0108 21:07:35.790565  162103 main.go:141] libmachine: (multinode-472593) DBG | I0108 21:07:35.790492  162126 retry.go:31] will retry after 882.756654ms: waiting for machine to come up
	I0108 21:07:36.675079  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:36.675386  162103 main.go:141] libmachine: (multinode-472593) DBG | unable to find current IP address of domain multinode-472593 in network mk-multinode-472593
	I0108 21:07:36.675407  162103 main.go:141] libmachine: (multinode-472593) DBG | I0108 21:07:36.675362  162126 retry.go:31] will retry after 1.186354192s: waiting for machine to come up
	I0108 21:07:37.863720  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:37.864077  162103 main.go:141] libmachine: (multinode-472593) DBG | unable to find current IP address of domain multinode-472593 in network mk-multinode-472593
	I0108 21:07:37.864111  162103 main.go:141] libmachine: (multinode-472593) DBG | I0108 21:07:37.864050  162126 retry.go:31] will retry after 1.348973822s: waiting for machine to come up
	I0108 21:07:39.214600  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:39.215057  162103 main.go:141] libmachine: (multinode-472593) DBG | unable to find current IP address of domain multinode-472593 in network mk-multinode-472593
	I0108 21:07:39.215081  162103 main.go:141] libmachine: (multinode-472593) DBG | I0108 21:07:39.215010  162126 retry.go:31] will retry after 1.53262301s: waiting for machine to come up
	I0108 21:07:40.749493  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:40.749868  162103 main.go:141] libmachine: (multinode-472593) DBG | unable to find current IP address of domain multinode-472593 in network mk-multinode-472593
	I0108 21:07:40.749890  162103 main.go:141] libmachine: (multinode-472593) DBG | I0108 21:07:40.749830  162126 retry.go:31] will retry after 2.162432019s: waiting for machine to come up
	I0108 21:07:42.914694  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:42.915107  162103 main.go:141] libmachine: (multinode-472593) DBG | unable to find current IP address of domain multinode-472593 in network mk-multinode-472593
	I0108 21:07:42.915134  162103 main.go:141] libmachine: (multinode-472593) DBG | I0108 21:07:42.915055  162126 retry.go:31] will retry after 2.397700557s: waiting for machine to come up
	I0108 21:07:45.315549  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:45.315866  162103 main.go:141] libmachine: (multinode-472593) DBG | unable to find current IP address of domain multinode-472593 in network mk-multinode-472593
	I0108 21:07:45.315890  162103 main.go:141] libmachine: (multinode-472593) DBG | I0108 21:07:45.315826  162126 retry.go:31] will retry after 3.249372193s: waiting for machine to come up
	I0108 21:07:48.567723  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:48.568180  162103 main.go:141] libmachine: (multinode-472593) DBG | unable to find current IP address of domain multinode-472593 in network mk-multinode-472593
	I0108 21:07:48.568209  162103 main.go:141] libmachine: (multinode-472593) DBG | I0108 21:07:48.568122  162126 retry.go:31] will retry after 4.552078889s: waiting for machine to come up
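The retry.go lines above show the machine-up wait: each failed IP lookup schedules another attempt with a growing, jittered delay (203ms, 279ms, 461ms, ... up to several seconds). As a rough illustration only, not minikube's actual retry.go, the pattern is:

package sketch

import (
	"fmt"
	"log"
	"math/rand"
	"time"
)

// retryUntil keeps calling op with a growing, jittered delay until it
// succeeds or the deadline passes. Illustrative sketch, not minikube code.
func retryUntil(timeout time.Duration, op func() error) error {
	deadline := time.Now().Add(timeout)
	base := 200 * time.Millisecond
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for machine: %w", err)
		}
		// Add up to +50% jitter so parallel waiters do not sync up, then
		// grow the base delay, echoing the 203ms -> 279ms -> 461ms run above.
		wait := base + time.Duration(rand.Int63n(int64(base/2)))
		log.Printf("will retry after %v: waiting for machine to come up", wait)
		time.Sleep(wait)
		base = base * 3 / 2
	}
}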
	I0108 21:07:53.122577  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:53.123042  162103 main.go:141] libmachine: (multinode-472593) Found IP for machine: 192.168.39.250
	I0108 21:07:53.123068  162103 main.go:141] libmachine: (multinode-472593) Reserving static IP address...
	I0108 21:07:53.123084  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has current primary IP address 192.168.39.250 and MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:53.123406  162103 main.go:141] libmachine: (multinode-472593) DBG | unable to find host DHCP lease matching {name: "multinode-472593", mac: "52:54:00:18:79:5e", ip: "192.168.39.250"} in network mk-multinode-472593
	I0108 21:07:53.197744  162103 main.go:141] libmachine: (multinode-472593) DBG | Getting to WaitForSSH function...
	I0108 21:07:53.197785  162103 main.go:141] libmachine: (multinode-472593) Reserved static IP address: 192.168.39.250
	I0108 21:07:53.197801  162103 main.go:141] libmachine: (multinode-472593) Waiting for SSH to be available...
	I0108 21:07:53.200333  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:53.200686  162103 main.go:141] libmachine: (multinode-472593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:79:5e", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:07:46 +0000 UTC Type:0 Mac:52:54:00:18:79:5e Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:minikube Clientid:01:52:54:00:18:79:5e}
	I0108 21:07:53.200720  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined IP address 192.168.39.250 and MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:53.200859  162103 main.go:141] libmachine: (multinode-472593) DBG | Using SSH client type: external
	I0108 21:07:53.200908  162103 main.go:141] libmachine: (multinode-472593) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593/id_rsa (-rw-------)
	I0108 21:07:53.200944  162103 main.go:141] libmachine: (multinode-472593) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.250 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 21:07:53.200964  162103 main.go:141] libmachine: (multinode-472593) DBG | About to run SSH command:
	I0108 21:07:53.200980  162103 main.go:141] libmachine: (multinode-472593) DBG | exit 0
	I0108 21:07:53.293006  162103 main.go:141] libmachine: (multinode-472593) DBG | SSH cmd err, output: <nil>: 
	I0108 21:07:53.293270  162103 main.go:141] libmachine: (multinode-472593) KVM machine creation complete!
	I0108 21:07:53.293643  162103 main.go:141] libmachine: (multinode-472593) Calling .GetConfigRaw
	I0108 21:07:53.294145  162103 main.go:141] libmachine: (multinode-472593) Calling .DriverName
	I0108 21:07:53.294346  162103 main.go:141] libmachine: (multinode-472593) Calling .DriverName
	I0108 21:07:53.294492  162103 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0108 21:07:53.294509  162103 main.go:141] libmachine: (multinode-472593) Calling .GetState
	I0108 21:07:53.295799  162103 main.go:141] libmachine: Detecting operating system of created instance...
	I0108 21:07:53.295816  162103 main.go:141] libmachine: Waiting for SSH to be available...
	I0108 21:07:53.295823  162103 main.go:141] libmachine: Getting to WaitForSSH function...
	I0108 21:07:53.295832  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHHostname
	I0108 21:07:53.298003  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:53.298377  162103 main.go:141] libmachine: (multinode-472593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:79:5e", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:07:46 +0000 UTC Type:0 Mac:52:54:00:18:79:5e Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:multinode-472593 Clientid:01:52:54:00:18:79:5e}
	I0108 21:07:53.298409  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined IP address 192.168.39.250 and MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:53.298534  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHPort
	I0108 21:07:53.298728  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHKeyPath
	I0108 21:07:53.298867  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHKeyPath
	I0108 21:07:53.298980  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHUsername
	I0108 21:07:53.299204  162103 main.go:141] libmachine: Using SSH client type: native
	I0108 21:07:53.299698  162103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0108 21:07:53.299718  162103 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0108 21:07:53.420525  162103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
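The WaitForSSH probe above simply runs "exit 0" over SSH until it succeeds. The log shows minikube using both an external ssh binary and a native client; a self-contained approximation of the native path with golang.org/x/crypto/ssh (sketch only; address, user, and key path are taken from the log lines above):

package sketch

import (
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// probeSSH dials the node and runs "exit 0", mirroring the WaitForSSH
// probe in the log, e.g. probeSSH("192.168.39.250:22", keyPath).
func probeSSH(addr, keyPath string) error {
	pem, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(pem)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}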
	I0108 21:07:53.420545  162103 main.go:141] libmachine: Detecting the provisioner...
	I0108 21:07:53.420553  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHHostname
	I0108 21:07:53.423182  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:53.423482  162103 main.go:141] libmachine: (multinode-472593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:79:5e", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:07:46 +0000 UTC Type:0 Mac:52:54:00:18:79:5e Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:multinode-472593 Clientid:01:52:54:00:18:79:5e}
	I0108 21:07:53.423503  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined IP address 192.168.39.250 and MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:53.423644  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHPort
	I0108 21:07:53.423853  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHKeyPath
	I0108 21:07:53.424016  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHKeyPath
	I0108 21:07:53.424145  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHUsername
	I0108 21:07:53.424277  162103 main.go:141] libmachine: Using SSH client type: native
	I0108 21:07:53.424654  162103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0108 21:07:53.424670  162103 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0108 21:07:53.545577  162103 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0108 21:07:53.545657  162103 main.go:141] libmachine: found compatible host: buildroot
	I0108 21:07:53.545670  162103 main.go:141] libmachine: Provisioning with buildroot...
	I0108 21:07:53.545678  162103 main.go:141] libmachine: (multinode-472593) Calling .GetMachineName
	I0108 21:07:53.545932  162103 buildroot.go:166] provisioning hostname "multinode-472593"
	I0108 21:07:53.545958  162103 main.go:141] libmachine: (multinode-472593) Calling .GetMachineName
	I0108 21:07:53.546131  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHHostname
	I0108 21:07:53.548769  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:53.549149  162103 main.go:141] libmachine: (multinode-472593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:79:5e", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:07:46 +0000 UTC Type:0 Mac:52:54:00:18:79:5e Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:multinode-472593 Clientid:01:52:54:00:18:79:5e}
	I0108 21:07:53.549186  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined IP address 192.168.39.250 and MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:53.549314  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHPort
	I0108 21:07:53.549485  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHKeyPath
	I0108 21:07:53.549657  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHKeyPath
	I0108 21:07:53.549807  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHUsername
	I0108 21:07:53.549961  162103 main.go:141] libmachine: Using SSH client type: native
	I0108 21:07:53.550271  162103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0108 21:07:53.550285  162103 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-472593 && echo "multinode-472593" | sudo tee /etc/hostname
	I0108 21:07:53.680556  162103 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-472593
	
	I0108 21:07:53.680590  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHHostname
	I0108 21:07:53.683465  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:53.683780  162103 main.go:141] libmachine: (multinode-472593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:79:5e", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:07:46 +0000 UTC Type:0 Mac:52:54:00:18:79:5e Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:multinode-472593 Clientid:01:52:54:00:18:79:5e}
	I0108 21:07:53.683815  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined IP address 192.168.39.250 and MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:53.683981  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHPort
	I0108 21:07:53.684191  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHKeyPath
	I0108 21:07:53.684352  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHKeyPath
	I0108 21:07:53.684513  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHUsername
	I0108 21:07:53.684716  162103 main.go:141] libmachine: Using SSH client type: native
	I0108 21:07:53.685018  162103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0108 21:07:53.685034  162103 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-472593' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-472593/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-472593' | sudo tee -a /etc/hosts; 
				fi
			fi
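In plain terms, that remote snippet is idempotent hostname bookkeeping: if no /etc/hosts line already ends in the new hostname, it either rewrites an existing 127.0.1.1 entry in place with sed, or appends a fresh "127.0.1.1 multinode-472593" line with tee -a.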
	I0108 21:07:53.812527  162103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:07:53.812553  162103 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-142784/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-142784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-142784/.minikube}
	I0108 21:07:53.812595  162103 buildroot.go:174] setting up certificates
	I0108 21:07:53.812605  162103 provision.go:83] configureAuth start
	I0108 21:07:53.812614  162103 main.go:141] libmachine: (multinode-472593) Calling .GetMachineName
	I0108 21:07:53.812898  162103 main.go:141] libmachine: (multinode-472593) Calling .GetIP
	I0108 21:07:53.815477  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:53.815821  162103 main.go:141] libmachine: (multinode-472593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:79:5e", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:07:46 +0000 UTC Type:0 Mac:52:54:00:18:79:5e Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:multinode-472593 Clientid:01:52:54:00:18:79:5e}
	I0108 21:07:53.815854  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined IP address 192.168.39.250 and MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:53.815984  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHHostname
	I0108 21:07:53.818194  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:53.818538  162103 main.go:141] libmachine: (multinode-472593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:79:5e", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:07:46 +0000 UTC Type:0 Mac:52:54:00:18:79:5e Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:multinode-472593 Clientid:01:52:54:00:18:79:5e}
	I0108 21:07:53.818569  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined IP address 192.168.39.250 and MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:53.818666  162103 provision.go:138] copyHostCerts
	I0108 21:07:53.818706  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17866-142784/.minikube/ca.pem
	I0108 21:07:53.818754  162103 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-142784/.minikube/ca.pem, removing ...
	I0108 21:07:53.818767  162103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-142784/.minikube/ca.pem
	I0108 21:07:53.818833  162103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-142784/.minikube/ca.pem (1078 bytes)
	I0108 21:07:53.818953  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17866-142784/.minikube/cert.pem
	I0108 21:07:53.818982  162103 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-142784/.minikube/cert.pem, removing ...
	I0108 21:07:53.818989  162103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-142784/.minikube/cert.pem
	I0108 21:07:53.819020  162103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-142784/.minikube/cert.pem (1123 bytes)
	I0108 21:07:53.819109  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17866-142784/.minikube/key.pem
	I0108 21:07:53.819134  162103 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-142784/.minikube/key.pem, removing ...
	I0108 21:07:53.819140  162103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-142784/.minikube/key.pem
	I0108 21:07:53.819173  162103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-142784/.minikube/key.pem (1679 bytes)
	I0108 21:07:53.819248  162103 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-142784/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca-key.pem org=jenkins.multinode-472593 san=[192.168.39.250 192.168.39.250 localhost 127.0.0.1 minikube multinode-472593]
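provision.go above generates a server certificate whose SAN list covers the VM IP, loopback, and the names minikube and multinode-472593, signed by the host CA key. A standard-library sketch of issuing such a SAN certificate (illustrative only, not minikube's code; the SANs, org, and 26280h lifetime are taken from the log and config above):

package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// issueServerCert creates a server certificate with the SAN list from the
// provision.go line above, signed by the given CA. Returns DER bytes.
func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-472593"}},
		// san=[192.168.39.250 ... localhost 127.0.0.1 minikube multinode-472593]
		IPAddresses: []net.IP{net.ParseIP("192.168.39.250"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "multinode-472593"},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s above
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}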
	I0108 21:07:54.016803  162103 provision.go:172] copyRemoteCerts
	I0108 21:07:54.016874  162103 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:07:54.016903  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHHostname
	I0108 21:07:54.019505  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:54.019803  162103 main.go:141] libmachine: (multinode-472593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:79:5e", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:07:46 +0000 UTC Type:0 Mac:52:54:00:18:79:5e Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:multinode-472593 Clientid:01:52:54:00:18:79:5e}
	I0108 21:07:54.019836  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined IP address 192.168.39.250 and MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:54.019988  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHPort
	I0108 21:07:54.020182  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHKeyPath
	I0108 21:07:54.020358  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHUsername
	I0108 21:07:54.020515  162103 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593/id_rsa Username:docker}
	I0108 21:07:54.110206  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 21:07:54.110310  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0108 21:07:54.132093  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 21:07:54.132186  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0108 21:07:54.153265  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 21:07:54.153360  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:07:54.174641  162103 provision.go:86] duration metric: configureAuth took 362.02223ms
	I0108 21:07:54.174669  162103 buildroot.go:189] setting minikube options for container-runtime
	I0108 21:07:54.174842  162103 config.go:182] Loaded profile config "multinode-472593": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 21:07:54.174867  162103 main.go:141] libmachine: (multinode-472593) Calling .DriverName
	I0108 21:07:54.175164  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHHostname
	I0108 21:07:54.177605  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:54.178019  162103 main.go:141] libmachine: (multinode-472593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:79:5e", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:07:46 +0000 UTC Type:0 Mac:52:54:00:18:79:5e Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:multinode-472593 Clientid:01:52:54:00:18:79:5e}
	I0108 21:07:54.178043  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined IP address 192.168.39.250 and MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:54.178209  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHPort
	I0108 21:07:54.178403  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHKeyPath
	I0108 21:07:54.178578  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHKeyPath
	I0108 21:07:54.178725  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHUsername
	I0108 21:07:54.178874  162103 main.go:141] libmachine: Using SSH client type: native
	I0108 21:07:54.179328  162103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0108 21:07:54.179348  162103 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 21:07:54.306837  162103 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0108 21:07:54.306864  162103 buildroot.go:70] root file system type: tmpfs
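
Because the buildroot ISO boots with a tmpfs root, nothing written under /lib/systemd survives a reboot, so the provisioner re-renders the docker unit on each start rather than trusting an earlier copy. The detection is the one-liner shown above:

	df --output=fstype / | tail -n 1    # prints "tmpfs" on this guest
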
	I0108 21:07:54.307001  162103 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 21:07:54.307028  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHHostname
	I0108 21:07:54.309898  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:54.310201  162103 main.go:141] libmachine: (multinode-472593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:79:5e", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:07:46 +0000 UTC Type:0 Mac:52:54:00:18:79:5e Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:multinode-472593 Clientid:01:52:54:00:18:79:5e}
	I0108 21:07:54.310236  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined IP address 192.168.39.250 and MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:54.310449  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHPort
	I0108 21:07:54.310623  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHKeyPath
	I0108 21:07:54.310766  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHKeyPath
	I0108 21:07:54.310881  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHUsername
	I0108 21:07:54.311023  162103 main.go:141] libmachine: Using SSH client type: native
	I0108 21:07:54.311481  162103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0108 21:07:54.311575  162103 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 21:07:54.445230  162103 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 21:07:54.445264  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHHostname
	I0108 21:07:54.447915  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:54.448274  162103 main.go:141] libmachine: (multinode-472593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:79:5e", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:07:46 +0000 UTC Type:0 Mac:52:54:00:18:79:5e Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:multinode-472593 Clientid:01:52:54:00:18:79:5e}
	I0108 21:07:54.448298  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined IP address 192.168.39.250 and MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:54.448427  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHPort
	I0108 21:07:54.448623  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHKeyPath
	I0108 21:07:54.448787  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHKeyPath
	I0108 21:07:54.448924  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHUsername
	I0108 21:07:54.449100  162103 main.go:141] libmachine: Using SSH client type: native
	I0108 21:07:54.449440  162103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0108 21:07:54.449459  162103 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 21:07:55.186189  162103 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
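
The update above is deliberately idempotent: the rendered unit goes to docker.service.new, and only when `diff -u` exits non-zero (the files differ, or, as on this fresh VM, the target does not exist yet) does the `|| { ... }` branch move the new file into place, then daemon-reload, enable, and restart the service. The same pattern in isolation, with hypothetical file and service names:

	# replace a config file only when its content actually changed
	sudo diff -u /etc/example.conf /etc/example.conf.new \
	  || { sudo mv /etc/example.conf.new /etc/example.conf \
	       && sudo systemctl daemon-reload \
	       && sudo systemctl restart example.service; }
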
	I0108 21:07:55.186215  162103 main.go:141] libmachine: Checking connection to Docker...
	I0108 21:07:55.186226  162103 main.go:141] libmachine: (multinode-472593) Calling .GetURL
	I0108 21:07:55.187459  162103 main.go:141] libmachine: (multinode-472593) DBG | Using libvirt version 6000000
	I0108 21:07:55.189782  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:55.190067  162103 main.go:141] libmachine: (multinode-472593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:79:5e", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:07:46 +0000 UTC Type:0 Mac:52:54:00:18:79:5e Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:multinode-472593 Clientid:01:52:54:00:18:79:5e}
	I0108 21:07:55.190095  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined IP address 192.168.39.250 and MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:55.190307  162103 main.go:141] libmachine: Docker is up and running!
	I0108 21:07:55.190325  162103 main.go:141] libmachine: Reticulating splines...
	I0108 21:07:55.190332  162103 client.go:171] LocalClient.Create took 23.963358746s
	I0108 21:07:55.190353  162103 start.go:167] duration metric: libmachine.API.Create for "multinode-472593" took 23.963422891s
	I0108 21:07:55.190365  162103 start.go:300] post-start starting for "multinode-472593" (driver="kvm2")
	I0108 21:07:55.190379  162103 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:07:55.190400  162103 main.go:141] libmachine: (multinode-472593) Calling .DriverName
	I0108 21:07:55.190649  162103 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:07:55.190688  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHHostname
	I0108 21:07:55.192658  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:55.192960  162103 main.go:141] libmachine: (multinode-472593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:79:5e", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:07:46 +0000 UTC Type:0 Mac:52:54:00:18:79:5e Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:multinode-472593 Clientid:01:52:54:00:18:79:5e}
	I0108 21:07:55.192987  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined IP address 192.168.39.250 and MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:55.193144  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHPort
	I0108 21:07:55.193307  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHKeyPath
	I0108 21:07:55.193483  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHUsername
	I0108 21:07:55.193629  162103 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593/id_rsa Username:docker}
	I0108 21:07:55.282935  162103 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:07:55.286763  162103 command_runner.go:130] > NAME=Buildroot
	I0108 21:07:55.286784  162103 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0108 21:07:55.286791  162103 command_runner.go:130] > ID=buildroot
	I0108 21:07:55.286809  162103 command_runner.go:130] > VERSION_ID=2021.02.12
	I0108 21:07:55.286819  162103 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0108 21:07:55.286950  162103 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 21:07:55.286971  162103 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-142784/.minikube/addons for local assets ...
	I0108 21:07:55.287044  162103 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-142784/.minikube/files for local assets ...
	I0108 21:07:55.287146  162103 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-142784/.minikube/files/etc/ssl/certs/1499882.pem -> 1499882.pem in /etc/ssl/certs
	I0108 21:07:55.287160  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/files/etc/ssl/certs/1499882.pem -> /etc/ssl/certs/1499882.pem
	I0108 21:07:55.287243  162103 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:07:55.295376  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/files/etc/ssl/certs/1499882.pem --> /etc/ssl/certs/1499882.pem (1708 bytes)
	I0108 21:07:55.315071  162103 start.go:303] post-start completed in 124.690155ms
	I0108 21:07:55.315122  162103 main.go:141] libmachine: (multinode-472593) Calling .GetConfigRaw
	I0108 21:07:55.315665  162103 main.go:141] libmachine: (multinode-472593) Calling .GetIP
	I0108 21:07:55.318107  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:55.318409  162103 main.go:141] libmachine: (multinode-472593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:79:5e", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:07:46 +0000 UTC Type:0 Mac:52:54:00:18:79:5e Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:multinode-472593 Clientid:01:52:54:00:18:79:5e}
	I0108 21:07:55.318439  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined IP address 192.168.39.250 and MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:55.318620  162103 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/config.json ...
	I0108 21:07:55.318829  162103 start.go:128] duration metric: createHost completed in 24.112004521s
	I0108 21:07:55.318850  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHHostname
	I0108 21:07:55.320912  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:55.321193  162103 main.go:141] libmachine: (multinode-472593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:79:5e", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:07:46 +0000 UTC Type:0 Mac:52:54:00:18:79:5e Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:multinode-472593 Clientid:01:52:54:00:18:79:5e}
	I0108 21:07:55.321214  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined IP address 192.168.39.250 and MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:55.321352  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHPort
	I0108 21:07:55.321538  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHKeyPath
	I0108 21:07:55.321696  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHKeyPath
	I0108 21:07:55.321811  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHUsername
	I0108 21:07:55.321957  162103 main.go:141] libmachine: Using SSH client type: native
	I0108 21:07:55.322404  162103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0108 21:07:55.322422  162103 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0108 21:07:55.441720  162103 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704748075.413092765
	
	I0108 21:07:55.441744  162103 fix.go:206] guest clock: 1704748075.413092765
	I0108 21:07:55.441754  162103 fix.go:219] Guest: 2024-01-08 21:07:55.413092765 +0000 UTC Remote: 2024-01-08 21:07:55.318841058 +0000 UTC m=+24.237109112 (delta=94.251707ms)
	I0108 21:07:55.441806  162103 fix.go:190] guest clock delta is within tolerance: 94.251707ms
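
The guest-clock check compares the output of `date +%s.%N` inside the VM against the host-side timestamp captured around the SSH round-trip, and only steps the guest clock when the delta exceeds the tolerance. A rough host-side equivalent, assuming SSH access to the profile and `bc` on the host:

	host=$(date +%s.%N)
	guest=$(minikube -p multinode-472593 ssh -- date +%s.%N)
	# positive or negative drift in seconds; well under a second in this run
	echo "$guest - $host" | bc -l
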
	I0108 21:07:55.441812  162103 start.go:83] releasing machines lock for "multinode-472593", held for 24.235088469s
	I0108 21:07:55.441832  162103 main.go:141] libmachine: (multinode-472593) Calling .DriverName
	I0108 21:07:55.442132  162103 main.go:141] libmachine: (multinode-472593) Calling .GetIP
	I0108 21:07:55.444488  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:55.444802  162103 main.go:141] libmachine: (multinode-472593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:79:5e", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:07:46 +0000 UTC Type:0 Mac:52:54:00:18:79:5e Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:multinode-472593 Clientid:01:52:54:00:18:79:5e}
	I0108 21:07:55.444831  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined IP address 192.168.39.250 and MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:55.444971  162103 main.go:141] libmachine: (multinode-472593) Calling .DriverName
	I0108 21:07:55.445540  162103 main.go:141] libmachine: (multinode-472593) Calling .DriverName
	I0108 21:07:55.445719  162103 main.go:141] libmachine: (multinode-472593) Calling .DriverName
	I0108 21:07:55.445790  162103 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:07:55.445839  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHHostname
	I0108 21:07:55.445925  162103 ssh_runner.go:195] Run: cat /version.json
	I0108 21:07:55.445941  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHHostname
	I0108 21:07:55.448347  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:55.448698  162103 main.go:141] libmachine: (multinode-472593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:79:5e", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:07:46 +0000 UTC Type:0 Mac:52:54:00:18:79:5e Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:multinode-472593 Clientid:01:52:54:00:18:79:5e}
	I0108 21:07:55.448721  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined IP address 192.168.39.250 and MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:55.448741  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:55.448965  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHPort
	I0108 21:07:55.449146  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHKeyPath
	I0108 21:07:55.449195  162103 main.go:141] libmachine: (multinode-472593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:79:5e", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:07:46 +0000 UTC Type:0 Mac:52:54:00:18:79:5e Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:multinode-472593 Clientid:01:52:54:00:18:79:5e}
	I0108 21:07:55.449221  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined IP address 192.168.39.250 and MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:55.449430  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHPort
	I0108 21:07:55.449438  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHUsername
	I0108 21:07:55.449608  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHKeyPath
	I0108 21:07:55.449625  162103 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593/id_rsa Username:docker}
	I0108 21:07:55.449750  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHUsername
	I0108 21:07:55.449891  162103 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593/id_rsa Username:docker}
	I0108 21:07:55.534853  162103 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1703723663-17866", "minikube_version": "v1.32.0", "commit": "eb69424d8f623d7cabea57d4395ce87adf1b5fc3"}
	I0108 21:07:55.535066  162103 ssh_runner.go:195] Run: systemctl --version
	I0108 21:07:55.561647  162103 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 21:07:55.561773  162103 command_runner.go:130] > systemd 247 (247)
	I0108 21:07:55.561802  162103 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0108 21:07:55.561879  162103 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 21:07:55.567012  162103 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0108 21:07:55.567072  162103 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 21:07:55.567136  162103 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:07:55.581353  162103 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0108 21:07:55.581587  162103 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
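
The find expression above is printed with its shell quoting already consumed (the runner logs argv, not the original command line). A properly quoted equivalent, which sidelines any bridge or podman CNI configs so the chosen CNI can own node networking:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;
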
	I0108 21:07:55.581609  162103 start.go:475] detecting cgroup driver to use...
	I0108 21:07:55.581727  162103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:07:55.598454  162103 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0108 21:07:55.598847  162103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0108 21:07:55.607874  162103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0108 21:07:55.616667  162103 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0108 21:07:55.616725  162103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0108 21:07:55.625432  162103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 21:07:55.633956  162103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0108 21:07:55.642184  162103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 21:07:55.650500  162103 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 21:07:55.659243  162103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0108 21:07:55.667905  162103 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:07:55.675484  162103 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0108 21:07:55.675564  162103 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:07:55.683807  162103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:07:55.780163  162103 ssh_runner.go:195] Run: sudo systemctl restart containerd
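
Although this profile runs Docker, the containerd config on disk is normalized first so a later runtime switch behaves predictably: the sed calls above pin the pause image to registry.k8s.io/pause:3.9, disable restrict_oom_score_adj, force the cgroupfs driver, migrate runtime entries to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d, after which containerd is reloaded and restarted. The cgroup-driver edit on its own:

	# keep containerd on cgroupfs (SystemdCgroup=false) and apply it
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo systemctl daemon-reload && sudo systemctl restart containerd
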
	I0108 21:07:55.797705  162103 start.go:475] detecting cgroup driver to use...
	I0108 21:07:55.797816  162103 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 21:07:55.812017  162103 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0108 21:07:55.812037  162103 command_runner.go:130] > [Unit]
	I0108 21:07:55.812044  162103 command_runner.go:130] > Description=Docker Application Container Engine
	I0108 21:07:55.812052  162103 command_runner.go:130] > Documentation=https://docs.docker.com
	I0108 21:07:55.812062  162103 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0108 21:07:55.812071  162103 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0108 21:07:55.812081  162103 command_runner.go:130] > StartLimitBurst=3
	I0108 21:07:55.812088  162103 command_runner.go:130] > StartLimitIntervalSec=60
	I0108 21:07:55.812092  162103 command_runner.go:130] > [Service]
	I0108 21:07:55.812097  162103 command_runner.go:130] > Type=notify
	I0108 21:07:55.812104  162103 command_runner.go:130] > Restart=on-failure
	I0108 21:07:55.812111  162103 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0108 21:07:55.812124  162103 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0108 21:07:55.812134  162103 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0108 21:07:55.812144  162103 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0108 21:07:55.812160  162103 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0108 21:07:55.812175  162103 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0108 21:07:55.812185  162103 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0108 21:07:55.812197  162103 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0108 21:07:55.812212  162103 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0108 21:07:55.812221  162103 command_runner.go:130] > ExecStart=
	I0108 21:07:55.812253  162103 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0108 21:07:55.812269  162103 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0108 21:07:55.812288  162103 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0108 21:07:55.812303  162103 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0108 21:07:55.812314  162103 command_runner.go:130] > LimitNOFILE=infinity
	I0108 21:07:55.812322  162103 command_runner.go:130] > LimitNPROC=infinity
	I0108 21:07:55.812332  162103 command_runner.go:130] > LimitCORE=infinity
	I0108 21:07:55.812343  162103 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0108 21:07:55.812355  162103 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0108 21:07:55.812365  162103 command_runner.go:130] > TasksMax=infinity
	I0108 21:07:55.812373  162103 command_runner.go:130] > TimeoutStartSec=0
	I0108 21:07:55.812392  162103 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0108 21:07:55.812405  162103 command_runner.go:130] > Delegate=yes
	I0108 21:07:55.812419  162103 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0108 21:07:55.812430  162103 command_runner.go:130] > KillMode=process
	I0108 21:07:55.812440  162103 command_runner.go:130] > [Install]
	I0108 21:07:55.812459  162103 command_runner.go:130] > WantedBy=multi-user.target
	I0108 21:07:55.812525  162103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:07:55.833503  162103 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 21:07:55.856552  162103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:07:55.867923  162103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:07:55.879262  162103 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:07:55.910222  162103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:07:55.922364  162103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:07:55.938217  162103 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0108 21:07:55.938887  162103 ssh_runner.go:195] Run: which cri-dockerd
	I0108 21:07:55.942335  162103 command_runner.go:130] > /usr/bin/cri-dockerd
	I0108 21:07:55.942434  162103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0108 21:07:55.950158  162103 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
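
At this point /etc/crictl.yaml points crictl at the cri-dockerd shim instead of containerd, and a CNI drop-in has been staged for the cri-docker unit. A quick sanity check that the shim answers on its socket, assuming crictl is on the guest's PATH:

	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
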
	I0108 21:07:55.964151  162103 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 21:07:56.061839  162103 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 21:07:56.163922  162103 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I0108 21:07:56.164098  162103 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0108 21:07:56.179818  162103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:07:56.290716  162103 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 21:07:57.683940  162103 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.393185049s)
	I0108 21:07:57.684026  162103 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 21:07:57.784403  162103 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0108 21:07:57.893268  162103 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 21:07:58.000767  162103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:07:58.108041  162103 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0108 21:07:58.123534  162103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:07:58.223225  162103 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0108 21:07:58.301788  162103 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0108 21:07:58.301866  162103 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0108 21:07:58.307438  162103 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0108 21:07:58.307468  162103 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 21:07:58.307479  162103 command_runner.go:130] > Device: 16h/22d	Inode: 859         Links: 1
	I0108 21:07:58.307490  162103 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0108 21:07:58.307500  162103 command_runner.go:130] > Access: 2024-01-08 21:07:58.211306011 +0000
	I0108 21:07:58.307508  162103 command_runner.go:130] > Modify: 2024-01-08 21:07:58.211306011 +0000
	I0108 21:07:58.307513  162103 command_runner.go:130] > Change: 2024-01-08 21:07:58.214310338 +0000
	I0108 21:07:58.307517  162103 command_runner.go:130] >  Birth: -
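
The stat output confirms the socket appeared well inside the 60s budget. A minimal poll loop equivalent to what the wait does, with a hypothetical one-second interval:

	# wait up to 60s for the cri-dockerd socket to appear
	for i in $(seq 1 60); do
	  [ -S /var/run/cri-dockerd.sock ] && break
	  sleep 1
	done
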
	I0108 21:07:58.307741  162103 start.go:543] Will wait 60s for crictl version
	I0108 21:07:58.307799  162103 ssh_runner.go:195] Run: which crictl
	I0108 21:07:58.312209  162103 command_runner.go:130] > /usr/bin/crictl
	I0108 21:07:58.312300  162103 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 21:07:58.360156  162103 command_runner.go:130] > Version:  0.1.0
	I0108 21:07:58.360177  162103 command_runner.go:130] > RuntimeName:  docker
	I0108 21:07:58.360182  162103 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0108 21:07:58.360188  162103 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 21:07:58.360429  162103 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0108 21:07:58.360538  162103 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 21:07:58.385442  162103 command_runner.go:130] > 24.0.7
	I0108 21:07:58.386722  162103 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 21:07:58.409429  162103 command_runner.go:130] > 24.0.7
	I0108 21:07:58.412275  162103 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0108 21:07:58.412315  162103 main.go:141] libmachine: (multinode-472593) Calling .GetIP
	I0108 21:07:58.414971  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:58.415250  162103 main.go:141] libmachine: (multinode-472593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:79:5e", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:07:46 +0000 UTC Type:0 Mac:52:54:00:18:79:5e Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:multinode-472593 Clientid:01:52:54:00:18:79:5e}
	I0108 21:07:58.415277  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined IP address 192.168.39.250 and MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:07:58.415519  162103 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 21:07:58.419456  162103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
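
The /etc/hosts update uses a rewrite-and-copy pattern instead of sed -i: strip any existing host.minikube.internal line, append the fresh mapping, write the result to a temp file, then sudo cp it over /etc/hosts, which avoids leaving a half-edited file if any step fails. The same pattern for an arbitrary, hypothetical entry:

	# replace-or-add a hosts entry without editing in place
	{ grep -v $'\texample.internal$' /etc/hosts; echo $'10.0.0.5\texample.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$
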
	I0108 21:07:58.430581  162103 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 21:07:58.430633  162103 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 21:07:58.448241  162103 docker.go:671] Got preloaded images: 
	I0108 21:07:58.448265  162103 docker.go:677] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0108 21:07:58.448308  162103 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0108 21:07:58.457476  162103 command_runner.go:139] > {"Repositories":{}}
	I0108 21:07:58.457627  162103 ssh_runner.go:195] Run: which lz4
	I0108 21:07:58.461403  162103 command_runner.go:130] > /usr/bin/lz4
	I0108 21:07:58.461427  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0108 21:07:58.461495  162103 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0108 21:07:58.465413  162103 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 21:07:58.465451  162103 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 21:07:58.465489  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0108 21:08:00.038310  162103 docker.go:635] Took 1.576833 seconds to copy over tarball
	I0108 21:08:00.038381  162103 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 21:08:02.445832  162103 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.407418062s)
	I0108 21:08:02.445869  162103 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 21:08:02.483713  162103 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0108 21:08:02.493619  162103 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.4":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.4":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.4":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.4":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0108 21:08:02.493799  162103 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
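
The preload path avoids pulling each image over the network: `docker images` came back empty, so minikube copied the ~423 MB lz4 tarball into the VM, unpacked it directly over /var/lib/docker, seeded repositories.json so the daemon recognizes the unpacked layers, and then restarted docker. The extraction step on its own:

	# unpack a preloaded image tarball over the docker data root, then restart
	sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo systemctl restart docker
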
	I0108 21:08:02.508654  162103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:08:02.607340  162103 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 21:08:06.717798  162103 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.110411311s)
	I0108 21:08:06.717903  162103 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0108 21:08:06.734791  162103 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0108 21:08:06.734818  162103 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0108 21:08:06.734823  162103 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0108 21:08:06.734832  162103 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0108 21:08:06.734841  162103 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0108 21:08:06.734864  162103 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0108 21:08:06.734872  162103 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0108 21:08:06.734885  162103 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:08:06.735897  162103 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0108 21:08:06.735923  162103 cache_images.go:84] Images are preloaded, skipping loading
	I0108 21:08:06.735992  162103 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 21:08:06.764062  162103 command_runner.go:130] > cgroupfs
	I0108 21:08:06.764218  162103 cni.go:84] Creating CNI manager for ""
	I0108 21:08:06.764235  162103 cni.go:136] 1 nodes found, recommending kindnet
	I0108 21:08:06.764268  162103 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:08:06.764296  162103 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.250 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-472593 NodeName:multinode-472593 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 21:08:06.764445  162103 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-472593"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
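
The rendered file stitches four documents together: an InitConfiguration with node-local settings (advertise address, CRI socket, kubelet node-ip), a ClusterConfiguration (control-plane endpoint, cert SANs, pod and service subnets), a KubeletConfiguration pinning the cgroupfs driver, and a KubeProxyConfiguration. Recent kubeadm releases can lint such a file before use; a sketch against the copy staged a few lines below, assuming `kubeadm config validate` is available in this v1.28.4 binary:

	sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
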
	I0108 21:08:06.764544  162103 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-472593 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-472593 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 21:08:06.764617  162103 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 21:08:06.772889  162103 command_runner.go:130] > kubeadm
	I0108 21:08:06.772908  162103 command_runner.go:130] > kubectl
	I0108 21:08:06.772914  162103 command_runner.go:130] > kubelet
	I0108 21:08:06.773018  162103 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 21:08:06.773085  162103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 21:08:06.780830  162103 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0108 21:08:06.795356  162103 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:08:06.809950  162103 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0108 21:08:06.824664  162103 ssh_runner.go:195] Run: grep 192.168.39.250	control-plane.minikube.internal$ /etc/hosts
	I0108 21:08:06.828136  162103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.250	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:08:06.841727  162103 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593 for IP: 192.168.39.250
	I0108 21:08:06.841768  162103 certs.go:190] acquiring lock for shared ca certs: {Name:mkac4a24ed34b812d829a04dcd5630cfa0273c2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:08:06.841922  162103 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-142784/.minikube/ca.key
	I0108 21:08:06.841975  162103 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-142784/.minikube/proxy-client-ca.key
	I0108 21:08:06.842037  162103 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/client.key
	I0108 21:08:06.842055  162103 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/client.crt with IP's: []
	I0108 21:08:06.947439  162103 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/client.crt ...
	I0108 21:08:06.947470  162103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/client.crt: {Name:mkcdd83ebfbceb89cb5a00595f60f04598df6638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:08:06.947665  162103 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/client.key ...
	I0108 21:08:06.947700  162103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/client.key: {Name:mk414e60e253a629c5ca37e85c9cc49c504cf2c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:08:06.947834  162103 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/apiserver.key.6e35f005
	I0108 21:08:06.947853  162103 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/apiserver.crt.6e35f005 with IP's: [192.168.39.250 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 21:08:07.051251  162103 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/apiserver.crt.6e35f005 ...
	I0108 21:08:07.051281  162103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/apiserver.crt.6e35f005: {Name:mkc3de46520696e0b859a833b2d3914f27bc1408 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:08:07.051464  162103 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/apiserver.key.6e35f005 ...
	I0108 21:08:07.051484  162103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/apiserver.key.6e35f005: {Name:mkbc78ce8871a889bfdd40c3207b3294ecfcc0b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:08:07.051573  162103 certs.go:337] copying /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/apiserver.crt.6e35f005 -> /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/apiserver.crt
	I0108 21:08:07.051674  162103 certs.go:341] copying /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/apiserver.key.6e35f005 -> /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/apiserver.key
	I0108 21:08:07.051763  162103 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/proxy-client.key
	I0108 21:08:07.051784  162103 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/proxy-client.crt with IP's: []
	I0108 21:08:07.140650  162103 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/proxy-client.crt ...
	I0108 21:08:07.140681  162103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/proxy-client.crt: {Name:mk25e6a4a2ebf9f840b65f5beb3a0f7e13a62d5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:08:07.140868  162103 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/proxy-client.key ...
	I0108 21:08:07.140888  162103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/proxy-client.key: {Name:mkd56f906c8a60b11337f0078594655fbaaf41c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
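
Three leaf certificates are minted here against the shared minikubeCA: a client cert for kubectl, the apiserver serving cert (whose SAN list covers the node IP 192.168.39.250, the service VIP 10.96.0.1, 127.0.0.1, and 10.0.0.1), and the front-proxy client cert. To confirm the SANs actually made it into the serving cert, openssl on the host path shown above:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
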
	I0108 21:08:07.140977  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0108 21:08:07.141006  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0108 21:08:07.141027  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0108 21:08:07.141060  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0108 21:08:07.141079  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 21:08:07.141096  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 21:08:07.141112  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 21:08:07.141131  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 21:08:07.141192  162103 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/home/jenkins/minikube-integration/17866-142784/.minikube/certs/149988.pem (1338 bytes)
	W0108 21:08:07.141242  162103 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-142784/.minikube/certs/home/jenkins/minikube-integration/17866-142784/.minikube/certs/149988_empty.pem, impossibly tiny 0 bytes
	I0108 21:08:07.141258  162103 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:08:07.141300  162103 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:08:07.141338  162103 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/home/jenkins/minikube-integration/17866-142784/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:08:07.141369  162103 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/home/jenkins/minikube-integration/17866-142784/.minikube/certs/key.pem (1679 bytes)
	I0108 21:08:07.141452  162103 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-142784/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-142784/.minikube/files/etc/ssl/certs/1499882.pem (1708 bytes)
	I0108 21:08:07.141497  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/files/etc/ssl/certs/1499882.pem -> /usr/share/ca-certificates/1499882.pem
	I0108 21:08:07.141517  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:08:07.141535  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/149988.pem -> /usr/share/ca-certificates/149988.pem
	I0108 21:08:07.142168  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 21:08:07.165351  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 21:08:07.187886  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 21:08:07.210052  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 21:08:07.232731  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:08:07.254874  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:08:07.276646  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:08:07.298531  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 21:08:07.320527  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/files/etc/ssl/certs/1499882.pem --> /usr/share/ca-certificates/1499882.pem (1708 bytes)
	I0108 21:08:07.342196  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:08:07.362855  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/certs/149988.pem --> /usr/share/ca-certificates/149988.pem (1338 bytes)
	I0108 21:08:07.384206  162103 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 21:08:07.398902  162103 ssh_runner.go:195] Run: openssl version
	I0108 21:08:07.404153  162103 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0108 21:08:07.404237  162103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1499882.pem && ln -fs /usr/share/ca-certificates/1499882.pem /etc/ssl/certs/1499882.pem"
	I0108 21:08:07.413500  162103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1499882.pem
	I0108 21:08:07.417941  162103 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 20:56 /usr/share/ca-certificates/1499882.pem
	I0108 21:08:07.418005  162103 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:56 /usr/share/ca-certificates/1499882.pem
	I0108 21:08:07.418076  162103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1499882.pem
	I0108 21:08:07.423101  162103 command_runner.go:130] > 3ec20f2e
	I0108 21:08:07.423366  162103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1499882.pem /etc/ssl/certs/3ec20f2e.0"
	I0108 21:08:07.432503  162103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:08:07.441634  162103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:08:07.445791  162103 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 20:51 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:08:07.446016  162103 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:51 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:08:07.446061  162103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:08:07.451723  162103 command_runner.go:130] > b5213941
	I0108 21:08:07.452040  162103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:08:07.461189  162103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149988.pem && ln -fs /usr/share/ca-certificates/149988.pem /etc/ssl/certs/149988.pem"
	I0108 21:08:07.470440  162103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149988.pem
	I0108 21:08:07.474736  162103 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 20:56 /usr/share/ca-certificates/149988.pem
	I0108 21:08:07.474936  162103 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:56 /usr/share/ca-certificates/149988.pem
	I0108 21:08:07.474997  162103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149988.pem
	I0108 21:08:07.480090  162103 command_runner.go:130] > 51391683
	I0108 21:08:07.480384  162103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/149988.pem /etc/ssl/certs/51391683.0"
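
The three test-and-link passes above follow OpenSSL's subject-hash convention: each CA under /usr/share/ca-certificates is hashed with "openssl x509 -hash", and a symlink named <hash>.0 is placed in /etc/ssl/certs so OpenSSL can locate the certificate by subject hash at verification time. A minimal sketch of the same pattern, with a placeholder certificate path rather than one from this run:

    # Install one CA cert into the OpenSSL trust store by subject hash.
    # /usr/share/ca-certificates/example.pem is illustrative, not from this run.
    CERT=/usr/share/ca-certificates/example.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # e.g. 3ec20f2e above
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # OpenSSL looks up <hash>.N
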
	I0108 21:08:07.489879  162103 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 21:08:07.493711  162103 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 21:08:07.494004  162103 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 21:08:07.494065  162103 kubeadm.go:404] StartCluster: {Name:multinode-472593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-472593 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:08:07.494178  162103 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0108 21:08:07.512942  162103 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 21:08:07.521322  162103 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0108 21:08:07.521350  162103 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0108 21:08:07.521359  162103 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0108 21:08:07.521442  162103 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 21:08:07.529497  162103 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 21:08:07.537511  162103 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0108 21:08:07.537531  162103 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0108 21:08:07.537538  162103 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0108 21:08:07.537546  162103 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:08:07.537569  162103 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 21:08:07.537608  162103 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0108 21:08:07.889450  162103 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:08:07.889482  162103 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:08:19.504667  162103 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 21:08:19.504719  162103 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0108 21:08:19.504769  162103 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 21:08:19.504791  162103 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 21:08:19.504896  162103 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:08:19.504907  162103 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 21:08:19.505017  162103 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:08:19.505037  162103 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 21:08:19.505142  162103 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:08:19.505154  162103 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 21:08:19.505242  162103 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:08:19.507274  162103 out.go:204]   - Generating certificates and keys ...
	I0108 21:08:19.505299  162103 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 21:08:19.507370  162103 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 21:08:19.507388  162103 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0108 21:08:19.507464  162103 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 21:08:19.507474  162103 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0108 21:08:19.507562  162103 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 21:08:19.507585  162103 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 21:08:19.507666  162103 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 21:08:19.507676  162103 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0108 21:08:19.507743  162103 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 21:08:19.507751  162103 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0108 21:08:19.507811  162103 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 21:08:19.507821  162103 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0108 21:08:19.507864  162103 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 21:08:19.507873  162103 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0108 21:08:19.508030  162103 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-472593] and IPs [192.168.39.250 127.0.0.1 ::1]
	I0108 21:08:19.508042  162103 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-472593] and IPs [192.168.39.250 127.0.0.1 ::1]
	I0108 21:08:19.508110  162103 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 21:08:19.508119  162103 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0108 21:08:19.508288  162103 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-472593] and IPs [192.168.39.250 127.0.0.1 ::1]
	I0108 21:08:19.508305  162103 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-472593] and IPs [192.168.39.250 127.0.0.1 ::1]
	I0108 21:08:19.508407  162103 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 21:08:19.508431  162103 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 21:08:19.508497  162103 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 21:08:19.508523  162103 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 21:08:19.508575  162103 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 21:08:19.508587  162103 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0108 21:08:19.508678  162103 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:08:19.508707  162103 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 21:08:19.508765  162103 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:08:19.508777  162103 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 21:08:19.508881  162103 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:08:19.508894  162103 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 21:08:19.508976  162103 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:08:19.509007  162103 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 21:08:19.509082  162103 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:08:19.509095  162103 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 21:08:19.509191  162103 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:08:19.509212  162103 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 21:08:19.509289  162103 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:08:19.511076  162103 out.go:204]   - Booting up control plane ...
	I0108 21:08:19.509335  162103 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 21:08:19.511181  162103 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:08:19.511197  162103 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 21:08:19.511303  162103 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:08:19.511326  162103 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 21:08:19.511432  162103 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:08:19.511449  162103 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 21:08:19.511591  162103 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:08:19.511606  162103 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:08:19.511737  162103 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:08:19.511760  162103 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:08:19.511816  162103 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 21:08:19.511827  162103 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 21:08:19.511980  162103 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:08:19.511996  162103 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 21:08:19.512085  162103 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.005104 seconds
	I0108 21:08:19.512097  162103 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.005104 seconds
	I0108 21:08:19.512219  162103 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:08:19.512239  162103 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 21:08:19.512422  162103 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:08:19.512444  162103 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 21:08:19.512509  162103 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:08:19.512525  162103 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0108 21:08:19.512707  162103 kubeadm.go:322] [mark-control-plane] Marking the node multinode-472593 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:08:19.512715  162103 command_runner.go:130] > [mark-control-plane] Marking the node multinode-472593 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 21:08:19.512794  162103 kubeadm.go:322] [bootstrap-token] Using token: y0my5t.z5aapqr7zydyfg1j
	I0108 21:08:19.512810  162103 command_runner.go:130] > [bootstrap-token] Using token: y0my5t.z5aapqr7zydyfg1j
	I0108 21:08:19.515310  162103 out.go:204]   - Configuring RBAC rules ...
	I0108 21:08:19.515412  162103 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:08:19.515423  162103 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 21:08:19.515517  162103 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:08:19.515525  162103 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 21:08:19.515645  162103 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:08:19.515654  162103 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 21:08:19.515796  162103 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:08:19.515818  162103 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 21:08:19.515961  162103 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:08:19.515981  162103 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 21:08:19.516048  162103 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:08:19.516055  162103 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 21:08:19.516171  162103 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:08:19.516189  162103 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 21:08:19.516256  162103 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0108 21:08:19.516265  162103 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 21:08:19.516310  162103 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0108 21:08:19.516314  162103 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 21:08:19.516332  162103 kubeadm.go:322] 
	I0108 21:08:19.516419  162103 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0108 21:08:19.516421  162103 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 21:08:19.516436  162103 kubeadm.go:322] 
	I0108 21:08:19.516528  162103 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0108 21:08:19.516537  162103 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 21:08:19.516544  162103 kubeadm.go:322] 
	I0108 21:08:19.516577  162103 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0108 21:08:19.516585  162103 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 21:08:19.516659  162103 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:08:19.516665  162103 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 21:08:19.516724  162103 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:08:19.516731  162103 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 21:08:19.516735  162103 kubeadm.go:322] 
	I0108 21:08:19.516790  162103 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0108 21:08:19.516795  162103 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 21:08:19.516799  162103 kubeadm.go:322] 
	I0108 21:08:19.516848  162103 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:08:19.516860  162103 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 21:08:19.516878  162103 kubeadm.go:322] 
	I0108 21:08:19.516949  162103 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0108 21:08:19.516956  162103 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 21:08:19.517023  162103 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:08:19.517029  162103 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 21:08:19.517109  162103 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:08:19.517119  162103 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 21:08:19.517129  162103 kubeadm.go:322] 
	I0108 21:08:19.517229  162103 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:08:19.517238  162103 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 21:08:19.517330  162103 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0108 21:08:19.517339  162103 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 21:08:19.517344  162103 kubeadm.go:322] 
	I0108 21:08:19.517446  162103 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token y0my5t.z5aapqr7zydyfg1j \
	I0108 21:08:19.517453  162103 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token y0my5t.z5aapqr7zydyfg1j \
	I0108 21:08:19.517533  162103 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d9519d3845afa8ae3d931945f02b04e4d4298af926dc19c200553582e4bd144f \
	I0108 21:08:19.517539  162103 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d9519d3845afa8ae3d931945f02b04e4d4298af926dc19c200553582e4bd144f \
	I0108 21:08:19.517555  162103 command_runner.go:130] > 	--control-plane 
	I0108 21:08:19.517561  162103 kubeadm.go:322] 	--control-plane 
	I0108 21:08:19.517564  162103 kubeadm.go:322] 
	I0108 21:08:19.517625  162103 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:08:19.517630  162103 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 21:08:19.517634  162103 kubeadm.go:322] 
	I0108 21:08:19.517755  162103 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token y0my5t.z5aapqr7zydyfg1j \
	I0108 21:08:19.517770  162103 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token y0my5t.z5aapqr7zydyfg1j \
	I0108 21:08:19.517889  162103 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d9519d3845afa8ae3d931945f02b04e4d4298af926dc19c200553582e4bd144f 
	I0108 21:08:19.517889  162103 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d9519d3845afa8ae3d931945f02b04e4d4298af926dc19c200553582e4bd144f 
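
The --discovery-token-ca-cert-hash printed by kubeadm above is the SHA-256 digest of the cluster CA's public key in DER encoding. If a join command is ever lost, the hash can be recomputed on the control plane with the standard recipe from the kubeadm documentation (a sketch; /etc/kubernetes/pki/ca.crt is kubeadm's default location, whereas this run keeps its certificates under /var/lib/minikube/certs):

    # Recompute the discovery hash from the cluster CA certificate.
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
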
	I0108 21:08:19.517923  162103 cni.go:84] Creating CNI manager for ""
	I0108 21:08:19.517931  162103 cni.go:136] 1 nodes found, recommending kindnet
	I0108 21:08:19.520522  162103 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0108 21:08:19.521971  162103 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:08:19.526918  162103 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 21:08:19.526941  162103 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0108 21:08:19.526950  162103 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0108 21:08:19.526959  162103 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 21:08:19.526968  162103 command_runner.go:130] > Access: 2024-01-08 21:07:43.663177134 +0000
	I0108 21:08:19.526989  162103 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0108 21:08:19.527001  162103 command_runner.go:130] > Change: 2024-01-08 21:07:41.996177134 +0000
	I0108 21:08:19.527008  162103 command_runner.go:130] >  Birth: -
	I0108 21:08:19.527070  162103 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 21:08:19.527091  162103 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 21:08:19.560059  162103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:08:20.643707  162103 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0108 21:08:20.651403  162103 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0108 21:08:20.659856  162103 command_runner.go:130] > serviceaccount/kindnet created
	I0108 21:08:20.673743  162103 command_runner.go:130] > daemonset.apps/kindnet created
	I0108 21:08:20.678155  162103 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.118056109s)
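
With the kindnet ClusterRole, ClusterRoleBinding, ServiceAccount, and DaemonSet created, the CNI rollout can be checked from the host. A sketch, assuming the upstream kindnet manifest's defaults (kube-system namespace, app=kindnet label), which this log does not show explicitly:

    # Watch the CNI DaemonSet come up on the new cluster.
    kubectl --context multinode-472593 -n kube-system rollout status daemonset kindnet
    kubectl --context multinode-472593 -n kube-system get pods -l app=kindnet -o wide
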
	I0108 21:08:20.678217  162103 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 21:08:20.678287  162103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:08:20.678328  162103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=multinode-472593 minikube.k8s.io/updated_at=2024_01_08T21_08_20_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:08:20.874917  162103 command_runner.go:130] > -16
	I0108 21:08:20.915497  162103 command_runner.go:130] > node/multinode-472593 labeled
	I0108 21:08:20.917182  162103 ops.go:34] apiserver oom_adj: -16
	I0108 21:08:20.917234  162103 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0108 21:08:20.917342  162103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:08:21.013226  162103 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:08:21.417601  162103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:08:21.511181  162103 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:08:21.917581  162103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:08:22.007204  162103 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:08:22.417651  162103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:08:22.497895  162103 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:08:22.918133  162103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:08:23.003761  162103 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:08:23.417551  162103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:08:23.509073  162103 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:08:23.917755  162103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:08:24.002638  162103 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:08:24.418340  162103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:08:24.515837  162103 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:08:24.918485  162103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:08:25.018233  162103 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:08:25.417677  162103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:08:25.516468  162103 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:08:25.918047  162103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:08:26.009719  162103 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:08:26.417853  162103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:08:26.514266  162103 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:08:26.917857  162103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:08:26.999153  162103 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:08:27.417549  162103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:08:27.501214  162103 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:08:27.918022  162103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:08:28.022296  162103 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:08:28.418066  162103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:08:28.515839  162103 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:08:28.918208  162103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:08:29.020022  162103 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:08:29.417678  162103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:08:29.519105  162103 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:08:29.917549  162103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:08:30.121103  162103 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:08:30.417524  162103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:08:30.530613  162103 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:08:30.917569  162103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:08:31.116176  162103 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:08:31.417575  162103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:08:31.537418  162103 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0108 21:08:31.917932  162103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:08:32.107412  162103 command_runner.go:130] > NAME      SECRETS   AGE
	I0108 21:08:32.107434  162103 command_runner.go:130] > default   0         1s
	I0108 21:08:32.109284  162103 kubeadm.go:1088] duration metric: took 11.431054644s to wait for elevateKubeSystemPrivileges.
	I0108 21:08:32.109321  162103 kubeadm.go:406] StartCluster complete in 24.615263741s
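
The burst of NotFound errors above is expected rather than a failure: kubeadm returns before the controller-manager's service-account controller has created the "default" ServiceAccount, so elevateKubeSystemPrivileges polls until it appears. The wait is roughly equivalent to this sketch, using a host-side kubectl rather than the in-VM binary:

    # Poll until the controller-manager has created the default ServiceAccount.
    until kubectl --context multinode-472593 -n default get sa default >/dev/null 2>&1; do
      sleep 0.5
    done
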
	I0108 21:08:32.109342  162103 settings.go:142] acquiring lock: {Name:mk32f1c44073adbc0198c166687e97d412b8ea1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:08:32.109438  162103 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17866-142784/kubeconfig
	I0108 21:08:32.110286  162103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17866-142784/kubeconfig: {Name:mk25db342dc8c7710d3ec35eaf4d60d2b1ef29a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:08:32.110498  162103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 21:08:32.110536  162103 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0108 21:08:32.110632  162103 addons.go:69] Setting storage-provisioner=true in profile "multinode-472593"
	I0108 21:08:32.110661  162103 addons.go:237] Setting addon storage-provisioner=true in "multinode-472593"
	I0108 21:08:32.110639  162103 addons.go:69] Setting default-storageclass=true in profile "multinode-472593"
	I0108 21:08:32.110746  162103 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-472593"
	I0108 21:08:32.110747  162103 host.go:66] Checking if "multinode-472593" exists ...
	I0108 21:08:32.110763  162103 config.go:182] Loaded profile config "multinode-472593": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 21:08:32.110843  162103 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17866-142784/kubeconfig
	I0108 21:08:32.111226  162103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0108 21:08:32.111238  162103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0108 21:08:32.111171  162103 kapi.go:59] client config for multinode-472593: &rest.Config{Host:"https://192.168.39.250:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/client.key", CAFile:"/home/jenkins/minikube-integration/17866-142784/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:08:32.111266  162103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:08:32.111264  162103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:08:32.111904  162103 cert_rotation.go:137] Starting client certificate rotation controller
	I0108 21:08:32.112228  162103 round_trippers.go:463] GET https://192.168.39.250:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 21:08:32.112245  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:32.112257  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:32.112266  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:32.122238  162103 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0108 21:08:32.122269  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:32.122280  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:32.122289  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:32.122296  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:32.122308  162103 round_trippers.go:580]     Content-Length: 291
	I0108 21:08:32.122318  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:32 GMT
	I0108 21:08:32.122329  162103 round_trippers.go:580]     Audit-Id: 6a1850cf-b548-4db2-8ba1-888a33465a6f
	I0108 21:08:32.122351  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:32.122398  162103 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"bf8d61dc-88f8-4920-b261-602e1fccbaff","resourceVersion":"388","creationTimestamp":"2024-01-08T21:08:19Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0108 21:08:32.122949  162103 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"bf8d61dc-88f8-4920-b261-602e1fccbaff","resourceVersion":"388","creationTimestamp":"2024-01-08T21:08:19Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0108 21:08:32.123008  162103 round_trippers.go:463] PUT https://192.168.39.250:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 21:08:32.123017  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:32.123024  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:32.123030  162103 round_trippers.go:473]     Content-Type: application/json
	I0108 21:08:32.123037  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:32.127568  162103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46193
	I0108 21:08:32.127989  162103 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:08:32.128543  162103 main.go:141] libmachine: Using API Version  1
	I0108 21:08:32.128569  162103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:08:32.128978  162103 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:08:32.129587  162103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0108 21:08:32.129616  162103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:08:32.130657  162103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40861
	I0108 21:08:32.131026  162103 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:08:32.131508  162103 main.go:141] libmachine: Using API Version  1
	I0108 21:08:32.131523  162103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:08:32.131879  162103 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:08:32.132052  162103 main.go:141] libmachine: (multinode-472593) Calling .GetState
	I0108 21:08:32.134200  162103 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17866-142784/kubeconfig
	I0108 21:08:32.134528  162103 kapi.go:59] client config for multinode-472593: &rest.Config{Host:"https://192.168.39.250:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/client.key", CAFile:"/home/jenkins/minikube-integration/17866-142784/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:08:32.134858  162103 addons.go:237] Setting addon default-storageclass=true in "multinode-472593"
	I0108 21:08:32.134895  162103 host.go:66] Checking if "multinode-472593" exists ...
	I0108 21:08:32.135293  162103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0108 21:08:32.135328  162103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:08:32.138217  162103 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0108 21:08:32.138242  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:32.138251  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:32.138259  162103 round_trippers.go:580]     Content-Length: 291
	I0108 21:08:32.138266  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:32 GMT
	I0108 21:08:32.138273  162103 round_trippers.go:580]     Audit-Id: 648471ef-93ab-4e21-819e-a422559ea225
	I0108 21:08:32.138279  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:32.138287  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:32.138309  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:32.138334  162103 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"bf8d61dc-88f8-4920-b261-602e1fccbaff","resourceVersion":"389","creationTimestamp":"2024-01-08T21:08:19Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0108 21:08:32.149812  162103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43917
	I0108 21:08:32.150249  162103 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:08:32.150435  162103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46433
	I0108 21:08:32.150753  162103 main.go:141] libmachine: Using API Version  1
	I0108 21:08:32.150772  162103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:08:32.150802  162103 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:08:32.151053  162103 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:08:32.151227  162103 main.go:141] libmachine: Using API Version  1
	I0108 21:08:32.151243  162103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:08:32.151559  162103 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:08:32.151687  162103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0108 21:08:32.151723  162103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:08:32.151736  162103 main.go:141] libmachine: (multinode-472593) Calling .GetState
	I0108 21:08:32.153581  162103 main.go:141] libmachine: (multinode-472593) Calling .DriverName
	I0108 21:08:32.155732  162103 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 21:08:32.157244  162103 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:08:32.157265  162103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 21:08:32.157289  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHHostname
	I0108 21:08:32.160786  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:08:32.161249  162103 main.go:141] libmachine: (multinode-472593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:79:5e", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:07:46 +0000 UTC Type:0 Mac:52:54:00:18:79:5e Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:multinode-472593 Clientid:01:52:54:00:18:79:5e}
	I0108 21:08:32.161279  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined IP address 192.168.39.250 and MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:08:32.161436  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHPort
	I0108 21:08:32.161663  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHKeyPath
	I0108 21:08:32.161836  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHUsername
	I0108 21:08:32.161991  162103 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593/id_rsa Username:docker}
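
The SSH client built above uses the per-machine key minikube generated for this profile; the same shell can be opened by hand, either through minikube or with plain ssh (key path, user, and IP exactly as logged in this run):

    # Interactive shell on the control-plane node.
    minikube -p multinode-472593 ssh
    # Or directly, using the same identity minikube uses:
    ssh -i /home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593/id_rsa docker@192.168.39.250
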
	I0108 21:08:32.168373  162103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38731
	I0108 21:08:32.168872  162103 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:08:32.169522  162103 main.go:141] libmachine: Using API Version  1
	I0108 21:08:32.169539  162103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:08:32.169879  162103 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:08:32.170046  162103 main.go:141] libmachine: (multinode-472593) Calling .GetState
	I0108 21:08:32.171734  162103 main.go:141] libmachine: (multinode-472593) Calling .DriverName
	I0108 21:08:32.172032  162103 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 21:08:32.172051  162103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 21:08:32.172070  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHHostname
	I0108 21:08:32.174609  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:08:32.175021  162103 main.go:141] libmachine: (multinode-472593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:79:5e", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:07:46 +0000 UTC Type:0 Mac:52:54:00:18:79:5e Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:multinode-472593 Clientid:01:52:54:00:18:79:5e}
	I0108 21:08:32.175045  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined IP address 192.168.39.250 and MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:08:32.175151  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHPort
	I0108 21:08:32.175358  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHKeyPath
	I0108 21:08:32.175498  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHUsername
	I0108 21:08:32.175635  162103 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593/id_rsa Username:docker}
	I0108 21:08:32.362418  162103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 21:08:32.382219  162103 command_runner.go:130] > apiVersion: v1
	I0108 21:08:32.382239  162103 command_runner.go:130] > data:
	I0108 21:08:32.382248  162103 command_runner.go:130] >   Corefile: |
	I0108 21:08:32.382252  162103 command_runner.go:130] >     .:53 {
	I0108 21:08:32.382256  162103 command_runner.go:130] >         errors
	I0108 21:08:32.382261  162103 command_runner.go:130] >         health {
	I0108 21:08:32.382265  162103 command_runner.go:130] >            lameduck 5s
	I0108 21:08:32.382269  162103 command_runner.go:130] >         }
	I0108 21:08:32.382272  162103 command_runner.go:130] >         ready
	I0108 21:08:32.382278  162103 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0108 21:08:32.382282  162103 command_runner.go:130] >            pods insecure
	I0108 21:08:32.382288  162103 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0108 21:08:32.382301  162103 command_runner.go:130] >            ttl 30
	I0108 21:08:32.382307  162103 command_runner.go:130] >         }
	I0108 21:08:32.382315  162103 command_runner.go:130] >         prometheus :9153
	I0108 21:08:32.382322  162103 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0108 21:08:32.382329  162103 command_runner.go:130] >            max_concurrent 1000
	I0108 21:08:32.382340  162103 command_runner.go:130] >         }
	I0108 21:08:32.382346  162103 command_runner.go:130] >         cache 30
	I0108 21:08:32.382352  162103 command_runner.go:130] >         loop
	I0108 21:08:32.382355  162103 command_runner.go:130] >         reload
	I0108 21:08:32.382359  162103 command_runner.go:130] >         loadbalance
	I0108 21:08:32.382363  162103 command_runner.go:130] >     }
	I0108 21:08:32.382370  162103 command_runner.go:130] > kind: ConfigMap
	I0108 21:08:32.382374  162103 command_runner.go:130] > metadata:
	I0108 21:08:32.382384  162103 command_runner.go:130] >   creationTimestamp: "2024-01-08T21:08:19Z"
	I0108 21:08:32.382389  162103 command_runner.go:130] >   name: coredns
	I0108 21:08:32.382393  162103 command_runner.go:130] >   namespace: kube-system
	I0108 21:08:32.382399  162103 command_runner.go:130] >   resourceVersion: "267"
	I0108 21:08:32.382408  162103 command_runner.go:130] >   uid: 2905597a-c51c-4e2c-8b14-5b2ad548c002
	I0108 21:08:32.385924  162103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
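	Note: the sed pipeline above makes two edits to the Corefile dumped a few lines earlier: it inserts a "log" directive ahead of "errors", and a "hosts" block ahead of the "forward" plugin so that host.minikube.internal resolves to the host-side bridge address 192.168.39.1. Reconstructed from the command itself (not additional log output), the patched server block should read roughly:
	
	    .:53 {
	        log
	        errors
	        health {
	           lameduck 5s
	        }
	        ready
	        kubernetes cluster.local in-addr.arpa ip6.arpa {
	           pods insecure
	           fallthrough in-addr.arpa ip6.arpa
	           ttl 30
	        }
	        prometheus :9153
	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf {
	           max_concurrent 1000
	        }
	        cache 30
	        loop
	        reload
	        loadbalance
	    }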
	I0108 21:08:32.469026  162103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 21:08:32.613308  162103 round_trippers.go:463] GET https://192.168.39.250:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 21:08:32.613341  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:32.613354  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:32.613364  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:32.615965  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:32.615989  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:32.615996  162103 round_trippers.go:580]     Audit-Id: 41ed7913-0871-494a-92bc-6ac04db10d0d
	I0108 21:08:32.616003  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:32.616012  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:32.616021  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:32.616029  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:32.616036  162103 round_trippers.go:580]     Content-Length: 291
	I0108 21:08:32.616045  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:32 GMT
	I0108 21:08:32.616072  162103 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"bf8d61dc-88f8-4920-b261-602e1fccbaff","resourceVersion":"399","creationTimestamp":"2024-01-08T21:08:19Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0108 21:08:32.616293  162103 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-472593" context rescaled to 1 replicas
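	Note: the GET against the autoscaling/v1 Scale subresource above is how kapi.go pins the coredns deployment to a single replica on this one-node control plane. A minimal by-hand equivalent, assuming kubectl is pointed at the same cluster, would be:
	
	    kubectl -n kube-system scale deployment coredns --replicas=1
	    kubectl -n kube-system get deployment coredns -o jsonpath='{.spec.replicas}'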
	I0108 21:08:32.616328  162103 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0108 21:08:32.618153  162103 out.go:177] * Verifying Kubernetes components...
	I0108 21:08:32.619549  162103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:08:33.423292  162103 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0108 21:08:33.434766  162103 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0108 21:08:33.449277  162103 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0108 21:08:33.459344  162103 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0108 21:08:33.466979  162103 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0108 21:08:33.479684  162103 command_runner.go:130] > pod/storage-provisioner created
	I0108 21:08:33.482340  162103 command_runner.go:130] > configmap/coredns replaced
	I0108 21:08:33.482381  162103 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.096425581s)
	I0108 21:08:33.482405  162103 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0108 21:08:33.482492  162103 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0108 21:08:33.482502  162103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.120053068s)
	I0108 21:08:33.482526  162103 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.013468339s)
	I0108 21:08:33.482540  162103 main.go:141] libmachine: Making call to close driver server
	I0108 21:08:33.482553  162103 main.go:141] libmachine: Making call to close driver server
	I0108 21:08:33.482569  162103 main.go:141] libmachine: (multinode-472593) Calling .Close
	I0108 21:08:33.482555  162103 main.go:141] libmachine: (multinode-472593) Calling .Close
	I0108 21:08:33.482861  162103 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:08:33.482879  162103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:08:33.482887  162103 main.go:141] libmachine: Making call to close driver server
	I0108 21:08:33.482894  162103 main.go:141] libmachine: (multinode-472593) Calling .Close
	I0108 21:08:33.482916  162103 main.go:141] libmachine: (multinode-472593) DBG | Closing plugin on server side
	I0108 21:08:33.483034  162103 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17866-142784/kubeconfig
	I0108 21:08:33.483060  162103 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:08:33.483106  162103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:08:33.483124  162103 main.go:141] libmachine: Making call to close driver server
	I0108 21:08:33.483142  162103 main.go:141] libmachine: (multinode-472593) Calling .Close
	I0108 21:08:33.483228  162103 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:08:33.483276  162103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:08:33.483301  162103 main.go:141] libmachine: (multinode-472593) DBG | Closing plugin on server side
	I0108 21:08:33.483380  162103 main.go:141] libmachine: (multinode-472593) DBG | Closing plugin on server side
	I0108 21:08:33.483409  162103 round_trippers.go:463] GET https://192.168.39.250:8443/apis/storage.k8s.io/v1/storageclasses
	I0108 21:08:33.483419  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:33.483427  162103 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:08:33.483430  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:33.483436  162103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:08:33.483445  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:33.483424  162103 kapi.go:59] client config for multinode-472593: &rest.Config{Host:"https://192.168.39.250:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/client.key", CAFile:"/home/jenkins/minikube-integration/17866-142784/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:08:33.483764  162103 node_ready.go:35] waiting up to 6m0s for node "multinode-472593" to be "Ready" ...
	I0108 21:08:33.483844  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:33.483849  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:33.483856  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:33.483862  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:33.495171  162103 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0108 21:08:33.495199  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:33.495216  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:33.495225  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:33.495238  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:33.495246  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:33 GMT
	I0108 21:08:33.495259  162103 round_trippers.go:580]     Audit-Id: fc769add-1d08-493f-9efd-a2ab75cf283f
	I0108 21:08:33.495266  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:33.495521  162103 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0108 21:08:33.495542  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:33.495551  162103 round_trippers.go:580]     Audit-Id: 24bea656-f460-4cc9-ad56-efc854901502
	I0108 21:08:33.495562  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:33.495574  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:33.495583  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:33.495594  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:33.495605  162103 round_trippers.go:580]     Content-Length: 1273
	I0108 21:08:33.495626  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:33 GMT
	I0108 21:08:33.496702  162103 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"412"},"items":[{"metadata":{"name":"standard","uid":"ce015902-51cf-4864-83ba-ba4477c93183","resourceVersion":"404","creationTimestamp":"2024-01-08T21:08:33Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T21:08:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0108 21:08:33.496874  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"364","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0108 21:08:33.497363  162103 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"ce015902-51cf-4864-83ba-ba4477c93183","resourceVersion":"404","creationTimestamp":"2024-01-08T21:08:33Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T21:08:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0108 21:08:33.497452  162103 round_trippers.go:463] PUT https://192.168.39.250:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0108 21:08:33.497465  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:33.497474  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:33.497485  162103 round_trippers.go:473]     Content-Type: application/json
	I0108 21:08:33.497496  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:33.500289  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:33.500309  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:33.500319  162103 round_trippers.go:580]     Content-Length: 1220
	I0108 21:08:33.500328  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:33 GMT
	I0108 21:08:33.500345  162103 round_trippers.go:580]     Audit-Id: 44db16b7-a121-42f2-b5a9-f6b47a7d7fa3
	I0108 21:08:33.500353  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:33.500365  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:33.500373  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:33.500383  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:33.500443  162103 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"ce015902-51cf-4864-83ba-ba4477c93183","resourceVersion":"404","creationTimestamp":"2024-01-08T21:08:33Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-08T21:08:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0108 21:08:33.500600  162103 main.go:141] libmachine: Making call to close driver server
	I0108 21:08:33.500622  162103 main.go:141] libmachine: (multinode-472593) Calling .Close
	I0108 21:08:33.500904  162103 main.go:141] libmachine: Successfully made call to close driver server
	I0108 21:08:33.500922  162103 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 21:08:33.500907  162103 main.go:141] libmachine: (multinode-472593) DBG | Closing plugin on server side
	I0108 21:08:33.502956  162103 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0108 21:08:33.504409  162103 addons.go:508] enable addons completed in 1.393876004s: enabled=[storage-provisioner default-storageclass]
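	Note: the PUT to /apis/storage.k8s.io/v1/storageclasses/standard above carries the storageclass.kubernetes.io/is-default-class: "true" annotation, which is what marks the addon's hostpath class as the cluster default. If the addon took effect, an illustrative check (not taken from this log) is:
	
	    kubectl get storageclass
	
	whose NAME column should show the class as "standard (default)".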
	I0108 21:08:33.985002  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:33.985031  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:33.985046  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:33.985056  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:33.987927  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:33.987976  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:33.987985  162103 round_trippers.go:580]     Audit-Id: 415633be-03ea-411c-b7d6-de4f711ce654
	I0108 21:08:33.987993  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:33.988000  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:33.988009  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:33.988018  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:33.988032  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:33 GMT
	I0108 21:08:33.988549  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"364","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0108 21:08:34.484235  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:34.484268  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:34.484282  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:34.484293  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:34.486930  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:34.486950  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:34.486957  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:34.486962  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:34.486968  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:34 GMT
	I0108 21:08:34.486973  162103 round_trippers.go:580]     Audit-Id: 3a02a4b9-1b0a-4204-9bcc-d57fe831123c
	I0108 21:08:34.486979  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:34.486988  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:34.487205  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"364","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0108 21:08:34.984994  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:34.985030  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:34.985042  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:34.985052  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:34.987535  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:34.987563  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:34.987574  162103 round_trippers.go:580]     Audit-Id: 2b649d59-8329-45b1-ae9f-bfeb2faaebac
	I0108 21:08:34.987587  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:34.987595  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:34.987602  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:34.987610  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:34.987619  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:34 GMT
	I0108 21:08:34.987769  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"364","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0108 21:08:35.484225  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:35.484249  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:35.484257  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:35.484263  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:35.486773  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:35.486794  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:35.486804  162103 round_trippers.go:580]     Audit-Id: be4e50ea-7103-4fa8-be56-93a0b9632d57
	I0108 21:08:35.486813  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:35.486820  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:35.486828  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:35.486835  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:35.486851  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:35 GMT
	I0108 21:08:35.486967  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"364","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0108 21:08:35.487413  162103 node_ready.go:58] node "multinode-472593" has status "Ready":"False"
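	Note: each half-second poll above fetches the full Node object, and node_ready.go derives the verdict from its Ready condition. The same check by hand, assuming kubectl access to the cluster, would be:
	
	    kubectl get node multinode-472593 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	
	which would keep printing False while the node is still coming up, matching the status logged here.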
	I0108 21:08:35.984629  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:35.984656  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:35.984669  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:35.984679  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:35.987363  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:35.987380  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:35.987399  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:35.987407  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:35.987415  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:35.987423  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:35.987432  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:35 GMT
	I0108 21:08:35.987441  162103 round_trippers.go:580]     Audit-Id: 2cf2bb34-ab34-425e-8423-7837ab2294f1
	I0108 21:08:35.987770  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"364","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0108 21:08:36.484106  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:36.484129  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:36.484137  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:36.484142  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:36.486772  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:36.486794  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:36.486803  162103 round_trippers.go:580]     Audit-Id: d4192068-9593-41c7-b35a-66f05a0ad7a4
	I0108 21:08:36.486811  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:36.486819  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:36.486826  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:36.486834  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:36.486842  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:36 GMT
	I0108 21:08:36.487359  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"364","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0108 21:08:36.984040  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:36.984079  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:36.984090  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:36.984100  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:36.986945  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:36.986971  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:36.986982  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:36 GMT
	I0108 21:08:36.986991  162103 round_trippers.go:580]     Audit-Id: 2d38030c-8723-40db-97ba-277afdcdc733
	I0108 21:08:36.986998  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:36.987006  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:36.987013  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:36.987021  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:36.987206  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"364","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0108 21:08:37.484674  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:37.484707  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:37.484719  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:37.484727  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:37.487447  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:37.487476  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:37.487483  162103 round_trippers.go:580]     Audit-Id: e34298ca-4ac1-4a73-a555-510191f6a104
	I0108 21:08:37.487489  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:37.487494  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:37.487499  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:37.487514  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:37.487523  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:37 GMT
	I0108 21:08:37.487877  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"364","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0108 21:08:37.488293  162103 node_ready.go:58] node "multinode-472593" has status "Ready":"False"
	I0108 21:08:37.984749  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:37.984783  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:37.984796  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:37.984805  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:37.987492  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:37.987522  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:37.987531  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:37.987552  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:37.987561  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:37 GMT
	I0108 21:08:37.987566  162103 round_trippers.go:580]     Audit-Id: 1807ec9e-793c-423e-be57-f7c312fce0c7
	I0108 21:08:37.987571  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:37.987580  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:37.987824  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"364","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0108 21:08:38.484515  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:38.484546  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:38.484555  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:38.484563  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:38.488344  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:08:38.488369  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:38.488379  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:38.488387  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:38 GMT
	I0108 21:08:38.488395  162103 round_trippers.go:580]     Audit-Id: 3646fcf4-b005-40ab-9f3e-1d35631ea4cf
	I0108 21:08:38.488402  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:38.488419  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:38.488441  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:38.488841  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"364","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0108 21:08:38.984446  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:38.984472  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:38.984483  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:38.984491  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:38.986883  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:38.986902  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:38.986909  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:38 GMT
	I0108 21:08:38.986914  162103 round_trippers.go:580]     Audit-Id: 25e970a8-85fc-4c05-93c0-4d5876c504ac
	I0108 21:08:38.986932  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:38.986941  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:38.986946  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:38.986951  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:38.987371  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"364","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0108 21:08:39.484080  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:39.484115  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:39.484128  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:39.484135  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:39.486863  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:39.486891  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:39.486900  162103 round_trippers.go:580]     Audit-Id: 15a932f8-9fe7-40bd-a826-f33c130a86f5
	I0108 21:08:39.486908  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:39.486915  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:39.486922  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:39.486929  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:39.486936  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:39 GMT
	I0108 21:08:39.487187  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"364","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0108 21:08:39.984921  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:39.984949  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:39.984957  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:39.984963  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:39.987559  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:39.987583  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:39.987593  162103 round_trippers.go:580]     Audit-Id: 41a4dc9d-2ce9-4daa-bbd7-080c6fe18b7c
	I0108 21:08:39.987601  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:39.987608  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:39.987616  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:39.987624  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:39.987633  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:39 GMT
	I0108 21:08:39.987782  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"364","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0108 21:08:39.988075  162103 node_ready.go:58] node "multinode-472593" has status "Ready":"False"
	I0108 21:08:40.484152  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:40.484184  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:40.484195  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:40.484204  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:40.486686  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:40.486707  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:40.486717  162103 round_trippers.go:580]     Audit-Id: aca35ba7-c1de-41bf-b3ed-2dc6b28b463f
	I0108 21:08:40.486725  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:40.486733  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:40.486742  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:40.486751  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:40.486762  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:40 GMT
	I0108 21:08:40.486880  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"364","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0108 21:08:40.984273  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:40.984305  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:40.984314  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:40.984319  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:40.987108  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:40.987131  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:40.987138  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:40.987146  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:40.987154  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:40.987166  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:40 GMT
	I0108 21:08:40.987178  162103 round_trippers.go:580]     Audit-Id: 2cec8d3e-118b-4a88-9a2d-9692252cbeb7
	I0108 21:08:40.987187  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:40.987337  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"364","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0108 21:08:41.485006  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:41.485035  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:41.485043  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:41.485049  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:41.488251  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:08:41.488285  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:41.488297  162103 round_trippers.go:580]     Audit-Id: 03b0f8ed-5a96-45a3-8780-7be86eb3a2e1
	I0108 21:08:41.488311  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:41.488317  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:41.488323  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:41.488328  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:41.488334  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:41 GMT
	I0108 21:08:41.488425  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"364","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0108 21:08:41.984007  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:41.984037  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:41.984046  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:41.984052  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:41.987023  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:41.987056  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:41.987067  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:41.987076  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:41.987085  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:41.987093  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:41 GMT
	I0108 21:08:41.987099  162103 round_trippers.go:580]     Audit-Id: fa7a1d76-7da1-4a6e-ba7a-70e2647716fe
	I0108 21:08:41.987112  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:41.987270  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"364","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0108 21:08:42.484618  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:42.484646  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:42.484654  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:42.484660  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:42.487827  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:08:42.487850  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:42.487858  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:42.487867  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:42.487875  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:42.487884  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:42.487893  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:42 GMT
	I0108 21:08:42.487903  162103 round_trippers.go:580]     Audit-Id: 0814641e-fe78-4ab7-8276-98c07fcdff43
	I0108 21:08:42.488077  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"364","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0108 21:08:42.488383  162103 node_ready.go:58] node "multinode-472593" has status "Ready":"False"
	I0108 21:08:42.984904  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:42.984933  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:42.984941  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:42.984947  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:42.988169  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:08:42.988196  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:42.988206  162103 round_trippers.go:580]     Audit-Id: 28167649-ec03-4538-8c54-2567ee450a65
	I0108 21:08:42.988215  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:42.988222  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:42.988230  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:42.988242  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:42.988255  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:42 GMT
	I0108 21:08:42.988369  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"364","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0108 21:08:43.485046  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:43.485081  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:43.485093  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:43.485103  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:43.493226  162103 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0108 21:08:43.493260  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:43.493271  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:43.493280  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:43.493287  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:43.493296  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:43 GMT
	I0108 21:08:43.493305  162103 round_trippers.go:580]     Audit-Id: 84e3da5c-bca5-4c5c-bd34-6f4f919148d8
	I0108 21:08:43.493312  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:43.493481  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"430","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0108 21:08:43.493893  162103 node_ready.go:49] node "multinode-472593" has status "Ready":"True"
	I0108 21:08:43.493918  162103 node_ready.go:38] duration metric: took 10.010132027s waiting for node "multinode-472593" to be "Ready" ...
	I0108 21:08:43.493932  162103 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
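For context on the loop visible above: the node flipped to Ready after roughly ten seconds of half-second polls against GET /api/v1/nodes/multinode-472593, and the test then moves on to waiting for the system-critical pods. A minimal sketch of such a readiness poll using client-go follows; it is illustrative only, not minikube's node_ready.go, and the clientset construction, poll interval, and error handling are assumptions.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the Node object until its Ready condition is True,
	// mirroring the GET loop in the log (interval and error handling here
	// are assumptions, not minikube's implementation).
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat transient API errors as "not ready yet"
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		// Assumption: a standard kubeconfig; the test itself authenticates
		// to https://192.168.39.250:8443 with the profile's credentials.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitNodeReady(context.Background(), cs, "multinode-472593", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}
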
	I0108 21:08:43.494023  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I0108 21:08:43.494049  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:43.494064  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:43.494072  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:43.497616  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:08:43.497644  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:43.497654  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:43.497663  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:43 GMT
	I0108 21:08:43.497674  162103 round_trippers.go:580]     Audit-Id: 8f9c84d1-b100-46cd-a54b-33d43c8136dd
	I0108 21:08:43.497682  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:43.497690  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:43.497700  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:43.498653  162103 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"436"},"items":[{"metadata":{"name":"coredns-5dd5756b68-wpmbp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3dfbd2f3-95c8-4c55-9312-e79187f61d66","resourceVersion":"435","creationTimestamp":"2024-01-08T21:08:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"81566ea4-ce50-4bf7-a009-e76513cef471","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81566ea4-ce50-4bf7-a009-e76513cef471\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54012 chars]
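Note that the test issues a single unfiltered List of kube-system pods here and matches the label sets named at 21:08:43 (k8s-app=kube-dns, component=etcd, and so on) client-side. A server-side alternative, shown only as an assumption-labeled variant and reusing the imports from the sketch above, would filter with one label selector per component:

	// Sketch: fetch the system-critical pods via server-side label
	// selectors instead of the one unfiltered List the test performs
	// (an assumption, not minikube's approach).
	func listCriticalPods(ctx context.Context, cs kubernetes.Interface) ([]corev1.Pod, error) {
		selectors := []string{
			"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
			"component=kube-controller-manager", "k8s-app=kube-proxy",
			"component=kube-scheduler",
		}
		var out []corev1.Pod
		for _, sel := range selectors {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
			if err != nil {
				return nil, err
			}
			out = append(out, pods.Items...)
		}
		return out, nil
	}
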
	I0108 21:08:43.502790  162103 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-wpmbp" in "kube-system" namespace to be "Ready" ...
	I0108 21:08:43.502871  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wpmbp
	I0108 21:08:43.502885  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:43.502895  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:43.502907  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:43.504906  162103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:08:43.504927  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:43.504936  162103 round_trippers.go:580]     Audit-Id: 0268ab55-a892-4412-b151-e36559ede4ed
	I0108 21:08:43.504944  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:43.504960  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:43.504967  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:43.504977  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:43.504989  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:43 GMT
	I0108 21:08:43.505305  162103 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wpmbp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3dfbd2f3-95c8-4c55-9312-e79187f61d66","resourceVersion":"435","creationTimestamp":"2024-01-08T21:08:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"81566ea4-ce50-4bf7-a009-e76513cef471","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81566ea4-ce50-4bf7-a009-e76513cef471\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0108 21:08:43.505747  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:43.505762  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:43.505769  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:43.505775  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:43.509189  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:08:43.509209  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:43.509226  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:43.509238  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:43.509246  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:43.509257  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:43 GMT
	I0108 21:08:43.509269  162103 round_trippers.go:580]     Audit-Id: 404e9a17-032e-43e6-937e-8383f0f71f8d
	I0108 21:08:43.509280  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:43.509941  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"430","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0108 21:08:44.003942  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wpmbp
	I0108 21:08:44.003977  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:44.003989  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:44.004000  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:44.008437  162103 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:08:44.008470  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:44.008481  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:44.008497  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:44 GMT
	I0108 21:08:44.008510  162103 round_trippers.go:580]     Audit-Id: 3138fb29-18f5-44c2-8671-b877d52357d4
	I0108 21:08:44.008522  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:44.008531  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:44.008540  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:44.008684  162103 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wpmbp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3dfbd2f3-95c8-4c55-9312-e79187f61d66","resourceVersion":"435","creationTimestamp":"2024-01-08T21:08:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"81566ea4-ce50-4bf7-a009-e76513cef471","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81566ea4-ce50-4bf7-a009-e76513cef471\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0108 21:08:44.009300  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:44.009323  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:44.009334  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:44.009344  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:44.011936  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:44.011962  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:44.011977  162103 round_trippers.go:580]     Audit-Id: 2f3f07ba-9f0a-4537-8d11-492da740cdcb
	I0108 21:08:44.011990  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:44.012003  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:44.012009  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:44.012014  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:44.012019  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:44 GMT
	I0108 21:08:44.012222  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"430","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0108 21:08:44.503993  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wpmbp
	I0108 21:08:44.504028  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:44.504041  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:44.504051  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:44.507328  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:08:44.507359  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:44.507370  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:44.507379  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:44.507391  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:44.507400  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:44.507411  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:44 GMT
	I0108 21:08:44.507422  162103 round_trippers.go:580]     Audit-Id: 75569ea6-5374-4ca2-a89b-c8b52f93ef08
	I0108 21:08:44.507566  162103 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wpmbp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3dfbd2f3-95c8-4c55-9312-e79187f61d66","resourceVersion":"435","creationTimestamp":"2024-01-08T21:08:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"81566ea4-ce50-4bf7-a009-e76513cef471","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81566ea4-ce50-4bf7-a009-e76513cef471\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0108 21:08:44.508152  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:44.508178  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:44.508190  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:44.508201  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:44.510659  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:44.510680  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:44.510690  162103 round_trippers.go:580]     Audit-Id: c5a987a5-d9de-4cc8-80cb-e36824214042
	I0108 21:08:44.510696  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:44.510704  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:44.510712  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:44.510720  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:44.510732  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:44 GMT
	I0108 21:08:44.510884  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"430","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0108 21:08:45.003544  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wpmbp
	I0108 21:08:45.003576  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:45.003587  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:45.003596  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:45.006496  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:45.006518  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:45.006525  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:45.006530  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:44 GMT
	I0108 21:08:45.006535  162103 round_trippers.go:580]     Audit-Id: 915f3c10-07a9-4787-9fc1-24ffd975971a
	I0108 21:08:45.006540  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:45.006545  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:45.006559  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:45.006804  162103 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wpmbp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3dfbd2f3-95c8-4c55-9312-e79187f61d66","resourceVersion":"443","creationTimestamp":"2024-01-08T21:08:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"81566ea4-ce50-4bf7-a009-e76513cef471","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81566ea4-ce50-4bf7-a009-e76513cef471\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6513 chars]
	I0108 21:08:45.007319  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:45.007336  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:45.007346  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:45.007356  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:45.009733  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:45.009754  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:45.009761  162103 round_trippers.go:580]     Audit-Id: 9b3fdfd7-1ea9-46f7-87c1-3f69f05b6674
	I0108 21:08:45.009767  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:45.009773  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:45.009781  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:45.009789  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:45.009797  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:45 GMT
	I0108 21:08:45.009961  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"430","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0108 21:08:45.503647  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wpmbp
	I0108 21:08:45.503674  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:45.503682  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:45.503688  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:45.506538  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:45.506562  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:45.506569  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:45 GMT
	I0108 21:08:45.506575  162103 round_trippers.go:580]     Audit-Id: d3460d16-61eb-48a4-9e48-c384ee29fca7
	I0108 21:08:45.506586  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:45.506595  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:45.506603  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:45.506610  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:45.506710  162103 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wpmbp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3dfbd2f3-95c8-4c55-9312-e79187f61d66","resourceVersion":"443","creationTimestamp":"2024-01-08T21:08:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"81566ea4-ce50-4bf7-a009-e76513cef471","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81566ea4-ce50-4bf7-a009-e76513cef471\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6513 chars]
	I0108 21:08:45.507149  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:45.507160  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:45.507168  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:45.507174  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:45.509218  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:45.509237  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:45.509244  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:45.509250  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:45.509258  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:45.509264  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:45 GMT
	I0108 21:08:45.509269  162103 round_trippers.go:580]     Audit-Id: 11066bf2-b1f6-49e7-917d-7245b24d2452
	I0108 21:08:45.509275  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:45.509431  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"430","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0108 21:08:45.509702  162103 pod_ready.go:102] pod "coredns-5dd5756b68-wpmbp" in "kube-system" namespace has status "Ready":"False"
	I0108 21:08:46.003113  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wpmbp
	I0108 21:08:46.003141  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:46.003149  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:46.003155  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:46.006052  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:46.006081  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:46.006090  162103 round_trippers.go:580]     Audit-Id: e48c132c-bb3c-41fd-b932-ad171d3d08fc
	I0108 21:08:46.006097  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:46.006104  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:46.006111  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:46.006119  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:46.006127  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:45 GMT
	I0108 21:08:46.006409  162103 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wpmbp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3dfbd2f3-95c8-4c55-9312-e79187f61d66","resourceVersion":"450","creationTimestamp":"2024-01-08T21:08:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"81566ea4-ce50-4bf7-a009-e76513cef471","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81566ea4-ce50-4bf7-a009-e76513cef471\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6284 chars]
	I0108 21:08:46.006957  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:46.006975  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:46.006983  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:46.006990  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:46.009206  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:46.009225  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:46.009235  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:46.009244  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:46.009249  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:46 GMT
	I0108 21:08:46.009254  162103 round_trippers.go:580]     Audit-Id: dacf75ef-18d2-4c2c-8436-c0302e8a020b
	I0108 21:08:46.009260  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:46.009264  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:46.009443  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"430","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0108 21:08:46.009841  162103 pod_ready.go:92] pod "coredns-5dd5756b68-wpmbp" in "kube-system" namespace has status "Ready":"True"
	I0108 21:08:46.009868  162103 pod_ready.go:81] duration metric: took 2.507056348s waiting for pod "coredns-5dd5756b68-wpmbp" in "kube-system" namespace to be "Ready" ...
	I0108 21:08:46.009877  162103 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-472593" in "kube-system" namespace to be "Ready" ...
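Each of these per-pod waits (coredns above, etcd here, then the remaining control-plane pods) reduces to checking the pod's Ready condition. A sketch of that predicate, again reusing the imports from the first sketch; minikube's pod_ready.go implements its own version, which may handle additional cases:

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
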
	I0108 21:08:46.009937  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-472593
	I0108 21:08:46.009944  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:46.009951  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:46.009959  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:46.011948  162103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:08:46.011963  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:46.011969  162103 round_trippers.go:580]     Audit-Id: f73b4608-f6c9-4375-a2d0-87207c855aca
	I0108 21:08:46.011975  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:46.011981  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:46.011987  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:46.011992  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:46.011997  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:46 GMT
	I0108 21:08:46.012300  162103 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-472593","namespace":"kube-system","uid":"48fa98ec-2db1-4f47-9f6b-0a4e7ff632c8","resourceVersion":"377","creationTimestamp":"2024-01-08T21:08:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.250:2379","kubernetes.io/config.hash":"bbfec1d04b85f774100656f1f492ef89","kubernetes.io/config.mirror":"bbfec1d04b85f774100656f1f492ef89","kubernetes.io/config.seen":"2024-01-08T21:08:19.534831121Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5872 chars]
	I0108 21:08:46.012772  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:46.012792  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:46.012803  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:46.012812  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:46.014690  162103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:08:46.014702  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:46.014708  162103 round_trippers.go:580]     Audit-Id: e3a9b9c5-595b-41fd-9fc8-f4fa780c5f95
	I0108 21:08:46.014713  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:46.014718  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:46.014723  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:46.014728  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:46.014734  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:46 GMT
	I0108 21:08:46.014992  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"430","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0108 21:08:46.015378  162103 pod_ready.go:92] pod "etcd-multinode-472593" in "kube-system" namespace has status "Ready":"True"
	I0108 21:08:46.015400  162103 pod_ready.go:81] duration metric: took 5.514852ms waiting for pod "etcd-multinode-472593" in "kube-system" namespace to be "Ready" ...
	I0108 21:08:46.015413  162103 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-472593" in "kube-system" namespace to be "Ready" ...
	I0108 21:08:46.015462  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-472593
	I0108 21:08:46.015470  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:46.015477  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:46.015483  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:46.018119  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:46.018134  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:46.018140  162103 round_trippers.go:580]     Audit-Id: 647aa10e-8b0a-4485-8e08-2bfc1d4d4d0a
	I0108 21:08:46.018145  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:46.018151  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:46.018155  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:46.018163  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:46.018168  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:46 GMT
	I0108 21:08:46.018595  162103 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-472593","namespace":"kube-system","uid":"fec467b8-a037-4806-8c81-3d53bf2c4bf2","resourceVersion":"426","creationTimestamp":"2024-01-08T21:08:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.250:8443","kubernetes.io/config.hash":"b179c45695f1bdcc29858d4d51fc6758","kubernetes.io/config.mirror":"b179c45695f1bdcc29858d4d51fc6758","kubernetes.io/config.seen":"2024-01-08T21:08:19.534832719Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7408 chars]
	I0108 21:08:46.019006  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:46.019019  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:46.019028  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:46.019034  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:46.020835  162103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:08:46.020847  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:46.020853  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:46.020858  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:46.020863  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:46.020871  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:46 GMT
	I0108 21:08:46.020879  162103 round_trippers.go:580]     Audit-Id: 24284071-6276-43bd-83e7-b706c612850c
	I0108 21:08:46.020888  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:46.021199  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"430","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0108 21:08:46.021470  162103 pod_ready.go:92] pod "kube-apiserver-multinode-472593" in "kube-system" namespace has status "Ready":"True"
	I0108 21:08:46.021484  162103 pod_ready.go:81] duration metric: took 6.065991ms waiting for pod "kube-apiserver-multinode-472593" in "kube-system" namespace to be "Ready" ...
	I0108 21:08:46.021493  162103 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-472593" in "kube-system" namespace to be "Ready" ...
	I0108 21:08:46.021540  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-472593
	I0108 21:08:46.021547  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:46.021554  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:46.021562  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:46.024823  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:08:46.024844  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:46.024850  162103 round_trippers.go:580]     Audit-Id: c7de099f-b0f1-4346-88ce-5e7cdf6b6f7f
	I0108 21:08:46.024855  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:46.024860  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:46.024865  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:46.024870  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:46.024875  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:46 GMT
	I0108 21:08:46.025009  162103 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-472593","namespace":"kube-system","uid":"a73873ed-7df0-44f1-82ea-6653d7514a7a","resourceVersion":"424","creationTimestamp":"2024-01-08T21:08:19Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6cd028364979f2013eabd2e9e20d2c13","kubernetes.io/config.mirror":"6cd028364979f2013eabd2e9e20d2c13","kubernetes.io/config.seen":"2024-01-08T21:08:19.534833628Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6973 chars]
	I0108 21:08:46.025374  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:46.025387  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:46.025410  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:46.025421  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:46.027992  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:46.028011  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:46.028017  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:46.028022  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:46.028029  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:46.028035  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:46.028040  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:46 GMT
	I0108 21:08:46.028045  162103 round_trippers.go:580]     Audit-Id: 5de5ea0f-dec2-412e-a88d-690dcff89b44
	I0108 21:08:46.028527  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"430","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0108 21:08:46.028785  162103 pod_ready.go:92] pod "kube-controller-manager-multinode-472593" in "kube-system" namespace has status "Ready":"True"
	I0108 21:08:46.028799  162103 pod_ready.go:81] duration metric: took 7.300066ms waiting for pod "kube-controller-manager-multinode-472593" in "kube-system" namespace to be "Ready" ...
	I0108 21:08:46.028809  162103 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m4w4g" in "kube-system" namespace to be "Ready" ...
	I0108 21:08:46.028868  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m4w4g
	I0108 21:08:46.028876  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:46.028882  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:46.028888  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:46.031159  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:46.031177  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:46.031183  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:46.031188  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:46.031194  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:46.031199  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:46.031204  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:46 GMT
	I0108 21:08:46.031209  162103 round_trippers.go:580]     Audit-Id: d593356e-a16a-447f-a3e2-2f937e5147dc
	I0108 21:08:46.031342  162103 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-m4w4g","generateName":"kube-proxy-","namespace":"kube-system","uid":"1394b324-16bf-4300-ab4d-443652d36475","resourceVersion":"415","creationTimestamp":"2024-01-08T21:08:31Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fbfa7824-80a4-44c3-9492-5116ffb6419b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbfa7824-80a4-44c3-9492-5116ffb6419b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0108 21:08:46.031693  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:46.031704  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:46.031711  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:46.031716  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:46.035224  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:08:46.035245  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:46.035251  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:46.035257  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:46.035263  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:46.035268  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:46 GMT
	I0108 21:08:46.035273  162103 round_trippers.go:580]     Audit-Id: 6d17f20d-b40e-4c0b-85c0-abf714ce833c
	I0108 21:08:46.035279  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:46.035825  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"430","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0108 21:08:46.036089  162103 pod_ready.go:92] pod "kube-proxy-m4w4g" in "kube-system" namespace has status "Ready":"True"
	I0108 21:08:46.036102  162103 pod_ready.go:81] duration metric: took 7.28741ms waiting for pod "kube-proxy-m4w4g" in "kube-system" namespace to be "Ready" ...
	I0108 21:08:46.036112  162103 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-472593" in "kube-system" namespace to be "Ready" ...
	I0108 21:08:46.203444  162103 request.go:629] Waited for 167.256823ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-472593
	I0108 21:08:46.203521  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-472593
	I0108 21:08:46.203526  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:46.203534  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:46.203540  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:46.206959  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:08:46.206985  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:46.206993  162103 round_trippers.go:580]     Audit-Id: bb66e5a3-a6f6-431f-8416-5da3a3bec8b6
	I0108 21:08:46.206998  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:46.207003  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:46.207009  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:46.207014  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:46.207020  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:46 GMT
	I0108 21:08:46.207402  162103 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-472593","namespace":"kube-system","uid":"2e871a08-9d08-4085-a056-3a2daa441ea9","resourceVersion":"425","creationTimestamp":"2024-01-08T21:08:19Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cc2702e0e8122c22aff19fbe1088d968","kubernetes.io/config.mirror":"cc2702e0e8122c22aff19fbe1088d968","kubernetes.io/config.seen":"2024-01-08T21:08:19.534826671Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4703 chars]
	I0108 21:08:46.403341  162103 request.go:629] Waited for 195.539382ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:46.403412  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:08:46.403420  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:46.403432  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:46.403443  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:46.406354  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:46.406379  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:46.406393  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:46.406402  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:46.406411  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:46.406419  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:46.406427  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:46 GMT
	I0108 21:08:46.406432  162103 round_trippers.go:580]     Audit-Id: bcb18571-3d26-414a-820a-43267cd03061
	I0108 21:08:46.406866  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"430","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0108 21:08:46.407154  162103 pod_ready.go:92] pod "kube-scheduler-multinode-472593" in "kube-system" namespace has status "Ready":"True"
	I0108 21:08:46.407170  162103 pod_ready.go:81] duration metric: took 371.052255ms waiting for pod "kube-scheduler-multinode-472593" in "kube-system" namespace to be "Ready" ...
	I0108 21:08:46.407180  162103 pod_ready.go:38] duration metric: took 2.913227985s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
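
The wait above polls each control-plane pod until its Ready condition reports True. A minimal sketch of the same probe against the raw API, using the endpoint and pod name from the log; authentication and CA verification (which the real client performs) are omitted here, so the insecure transport is purely illustrative:

// Sketch only: poll a pod until its Ready condition is "True",
// mirroring the pod_ready.go wait seen in the log.
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type pod struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func podReady(client *http.Client, url string) (bool, error) {
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var p pod
	if err := json.NewDecoder(resp.Body).Decode(&p); err != nil {
		return false, err
	}
	for _, c := range p.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only; the real client authenticates
	}}
	url := "https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m4w4g"
	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s budget in the log
	for time.Now().Before(deadline) {
		if ok, err := podReady(client, url); err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for Ready")
}
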
	I0108 21:08:46.407198  162103 api_server.go:52] waiting for apiserver process to appear ...
	I0108 21:08:46.407259  162103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:08:46.422027  162103 command_runner.go:130] > 1836
	I0108 21:08:46.422206  162103 api_server.go:72] duration metric: took 13.805841539s to wait for apiserver process to appear ...
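
The apiserver "process" check above is a single pgrep run over SSH. A local sketch of the same probe via os/exec (the real code goes through minikube's ssh_runner, which is not reproduced here):

// Sketch: check for a running kube-apiserver process the way the log's
// ssh_runner step does, but locally instead of over SSH.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver process not found:", err)
		return
	}
	fmt.Println("apiserver pid:", strings.TrimSpace(string(out))) // the log showed pid 1836
}
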
	I0108 21:08:46.422227  162103 api_server.go:88] waiting for apiserver healthz status ...
	I0108 21:08:46.422248  162103 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I0108 21:08:46.426992  162103 api_server.go:279] https://192.168.39.250:8443/healthz returned 200:
	ok
	I0108 21:08:46.427072  162103 round_trippers.go:463] GET https://192.168.39.250:8443/version
	I0108 21:08:46.427085  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:46.427093  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:46.427098  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:46.428047  162103 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0108 21:08:46.428068  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:46.428075  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:46.428081  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:46.428086  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:46.428091  162103 round_trippers.go:580]     Content-Length: 264
	I0108 21:08:46.428097  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:46 GMT
	I0108 21:08:46.428109  162103 round_trippers.go:580]     Audit-Id: 9c8fd2e9-8a17-448d-b100-31f90bdd1e3a
	I0108 21:08:46.428119  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:46.428170  162103 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0108 21:08:46.428295  162103 api_server.go:141] control plane version: v1.28.4
	I0108 21:08:46.428312  162103 api_server.go:131] duration metric: took 6.07999ms to wait for apiserver health ...
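
The health gate is two plain GETs: /healthz must return 200 "ok", then /version is decoded for the control-plane version. A sketch against the same endpoints, with auth and CA handling omitted for brevity:

// Sketch: the healthz-then-version probe from the log.
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
	}}

	resp, err := client.Get("https://192.168.39.250:8443/healthz")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("healthz %d: %s\n", resp.StatusCode, body) // expect 200 "ok"

	resp, err = client.Get("https://192.168.39.250:8443/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var v struct {
		GitVersion string `json:"gitVersion"`
	}
	json.NewDecoder(resp.Body).Decode(&v)
	fmt.Println("control plane version:", v.GitVersion) // v1.28.4 in the log
}
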
	I0108 21:08:46.428319  162103 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 21:08:46.603814  162103 request.go:629] Waited for 175.415554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I0108 21:08:46.603881  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I0108 21:08:46.603886  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:46.603893  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:46.603900  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:46.607685  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:08:46.607712  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:46.607720  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:46 GMT
	I0108 21:08:46.607725  162103 round_trippers.go:580]     Audit-Id: 49e25fa1-033f-4e0c-90fc-1dae2d09f53e
	I0108 21:08:46.607731  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:46.607737  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:46.607753  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:46.607765  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:46.608778  162103 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"coredns-5dd5756b68-wpmbp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3dfbd2f3-95c8-4c55-9312-e79187f61d66","resourceVersion":"450","creationTimestamp":"2024-01-08T21:08:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"81566ea4-ce50-4bf7-a009-e76513cef471","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81566ea4-ce50-4bf7-a009-e76513cef471\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54128 chars]
	I0108 21:08:46.610443  162103 system_pods.go:59] 8 kube-system pods found
	I0108 21:08:46.610466  162103 system_pods.go:61] "coredns-5dd5756b68-wpmbp" [3dfbd2f3-95c8-4c55-9312-e79187f61d66] Running
	I0108 21:08:46.610471  162103 system_pods.go:61] "etcd-multinode-472593" [48fa98ec-2db1-4f47-9f6b-0a4e7ff632c8] Running
	I0108 21:08:46.610475  162103 system_pods.go:61] "kindnet-zhh5c" [0452fc75-b53d-4528-a098-bbf6f7f9b197] Running
	I0108 21:08:46.610479  162103 system_pods.go:61] "kube-apiserver-multinode-472593" [fec467b8-a037-4806-8c81-3d53bf2c4bf2] Running
	I0108 21:08:46.610483  162103 system_pods.go:61] "kube-controller-manager-multinode-472593" [a73873ed-7df0-44f1-82ea-6653d7514a7a] Running
	I0108 21:08:46.610487  162103 system_pods.go:61] "kube-proxy-m4w4g" [1394b324-16bf-4300-ab4d-443652d36475] Running
	I0108 21:08:46.610491  162103 system_pods.go:61] "kube-scheduler-multinode-472593" [2e871a08-9d08-4085-a056-3a2daa441ea9] Running
	I0108 21:08:46.610495  162103 system_pods.go:61] "storage-provisioner" [eb978531-85e2-4a55-8f95-4ff3bc1595c8] Running
	I0108 21:08:46.610502  162103 system_pods.go:74] duration metric: took 182.175843ms to wait for pod list to return data ...
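
The 8-pod summary above reduces the PodList response to a name-plus-phase check. A sketch of that reduction over a trimmed PodList body (two pods only, shortened from the response in the log):

// Sketch: the system_pods check — verify every kube-system pod
// reports phase Running.
package main

import (
	"encoding/json"
	"fmt"
)

const podListJSON = `{"kind":"PodList","items":[
  {"metadata":{"name":"coredns-5dd5756b68-wpmbp"},"status":{"phase":"Running"}},
  {"metadata":{"name":"kube-proxy-m4w4g"},"status":{"phase":"Running"}}]}`

func main() {
	var pl struct {
		Items []struct {
			Metadata struct{ Name string } `json:"metadata"`
			Status   struct{ Phase string } `json:"status"`
		} `json:"items"`
	}
	if err := json.Unmarshal([]byte(podListJSON), &pl); err != nil {
		panic(err)
	}
	allRunning := true
	for _, p := range pl.Items {
		fmt.Printf("%q %s\n", p.Metadata.Name, p.Status.Phase)
		allRunning = allRunning && p.Status.Phase == "Running"
	}
	fmt.Println("k8s-apps running:", allRunning)
}
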
	I0108 21:08:46.610515  162103 default_sa.go:34] waiting for default service account to be created ...
	I0108 21:08:46.804019  162103 request.go:629] Waited for 193.416823ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/default/serviceaccounts
	I0108 21:08:46.804080  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/default/serviceaccounts
	I0108 21:08:46.804085  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:46.804092  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:46.804099  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:46.806828  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:08:46.806848  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:46.806856  162103 round_trippers.go:580]     Content-Length: 261
	I0108 21:08:46.806861  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:46 GMT
	I0108 21:08:46.806866  162103 round_trippers.go:580]     Audit-Id: e2bfd66c-794e-4c44-b0f6-b74449a648d3
	I0108 21:08:46.806871  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:46.806876  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:46.806881  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:46.806885  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:46.806906  162103 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"455"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"72cf016e-4155-4fd5-93b6-17e4bf972530","resourceVersion":"373","creationTimestamp":"2024-01-08T21:08:31Z"}}]}
	I0108 21:08:46.807091  162103 default_sa.go:45] found service account: "default"
	I0108 21:08:46.807107  162103 default_sa.go:55] duration metric: took 196.58706ms for default service account to be created ...
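
The recurring "Waited for ... due to client-side throttling" lines come from the Kubernetes client's own QPS limiter, not server-side priority and fairness. A sketch of equivalent client-side pacing with golang.org/x/time/rate; the QPS and burst values here are illustrative, not the client's actual settings:

// Sketch: pace outgoing requests with a token-bucket limiter, the
// mechanism behind the throttling messages in the log.
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	limiter := rate.NewLimiter(rate.Limit(5), 10) // ~5 requests/sec, burst of 10 (illustrative)
	for i := 0; i < 15; i++ {
		start := time.Now()
		if err := limiter.Wait(context.Background()); err != nil {
			panic(err)
		}
		if waited := time.Since(start); waited > time.Millisecond {
			fmt.Printf("request %d waited %v due to client-side throttling\n", i, waited)
		}
		// issue the API request here
	}
}
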
	I0108 21:08:46.807115  162103 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 21:08:47.003609  162103 request.go:629] Waited for 196.406382ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I0108 21:08:47.003680  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I0108 21:08:47.003685  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:47.003692  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:47.003699  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:47.007143  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:08:47.007169  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:47.007177  162103 round_trippers.go:580]     Audit-Id: b2066073-418a-446b-b035-668729e9de1b
	I0108 21:08:47.007185  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:47.007193  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:47.007200  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:47.007209  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:47.007216  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:47 GMT
	I0108 21:08:47.008122  162103 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"456"},"items":[{"metadata":{"name":"coredns-5dd5756b68-wpmbp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3dfbd2f3-95c8-4c55-9312-e79187f61d66","resourceVersion":"450","creationTimestamp":"2024-01-08T21:08:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"81566ea4-ce50-4bf7-a009-e76513cef471","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81566ea4-ce50-4bf7-a009-e76513cef471\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54128 chars]
	I0108 21:08:47.009833  162103 system_pods.go:86] 8 kube-system pods found
	I0108 21:08:47.009856  162103 system_pods.go:89] "coredns-5dd5756b68-wpmbp" [3dfbd2f3-95c8-4c55-9312-e79187f61d66] Running
	I0108 21:08:47.009861  162103 system_pods.go:89] "etcd-multinode-472593" [48fa98ec-2db1-4f47-9f6b-0a4e7ff632c8] Running
	I0108 21:08:47.009865  162103 system_pods.go:89] "kindnet-zhh5c" [0452fc75-b53d-4528-a098-bbf6f7f9b197] Running
	I0108 21:08:47.009869  162103 system_pods.go:89] "kube-apiserver-multinode-472593" [fec467b8-a037-4806-8c81-3d53bf2c4bf2] Running
	I0108 21:08:47.009874  162103 system_pods.go:89] "kube-controller-manager-multinode-472593" [a73873ed-7df0-44f1-82ea-6653d7514a7a] Running
	I0108 21:08:47.009878  162103 system_pods.go:89] "kube-proxy-m4w4g" [1394b324-16bf-4300-ab4d-443652d36475] Running
	I0108 21:08:47.009885  162103 system_pods.go:89] "kube-scheduler-multinode-472593" [2e871a08-9d08-4085-a056-3a2daa441ea9] Running
	I0108 21:08:47.009889  162103 system_pods.go:89] "storage-provisioner" [eb978531-85e2-4a55-8f95-4ff3bc1595c8] Running
	I0108 21:08:47.009895  162103 system_pods.go:126] duration metric: took 202.775908ms to wait for k8s-apps to be running ...
	I0108 21:08:47.009903  162103 system_svc.go:44] waiting for kubelet service to be running ...
	I0108 21:08:47.009947  162103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:08:47.023863  162103 system_svc.go:56] duration metric: WaitForService took 13.948152ms to wait for kubelet.
	I0108 21:08:47.023892  162103 kubeadm.go:581] duration metric: took 14.407533574s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
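
The kubelet check relies on systemctl's exit code: is-active --quiet prints nothing and exits 0 only when the unit is active. A local sketch using the same command line as the log:

// Sketch: the kubelet liveness check from the log, run locally.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exits non-zero (and prints nothing, because of --quiet) when inactive.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
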
	I0108 21:08:47.023914  162103 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:08:47.203314  162103 request.go:629] Waited for 179.318108ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes
	I0108 21:08:47.203388  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes
	I0108 21:08:47.203393  162103 round_trippers.go:469] Request Headers:
	I0108 21:08:47.203401  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:08:47.203407  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:08:47.206714  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:08:47.206734  162103 round_trippers.go:577] Response Headers:
	I0108 21:08:47.206740  162103 round_trippers.go:580]     Audit-Id: fb15d578-47b1-44dd-88d3-548d737fb64f
	I0108 21:08:47.206746  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:08:47.206751  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:08:47.206756  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:08:47.206761  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:08:47.206766  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:08:47 GMT
	I0108 21:08:47.206904  162103 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"456"},"items":[{"metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"430","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4836 chars]
	I0108 21:08:47.207229  162103 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:08:47.207248  162103 node_conditions.go:123] node cpu capacity is 2
	I0108 21:08:47.207259  162103 node_conditions.go:105] duration metric: took 183.340838ms to run NodePressure ...
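
The NodePressure step only needs each node's capacity figures (the 17784752Ki of ephemeral storage and 2 CPUs reported above). A sketch that decodes just those fields from /api/v1/nodes, again with TLS verification skipped purely for illustration:

// Sketch: read node capacity the way the NodePressure check does,
// keeping only the fields the log reports.
package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
)

type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Capacity map[string]string `json:"capacity"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
	}}
	resp, err := client.Get("https://192.168.39.250:8443/api/v1/nodes")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var nl nodeList
	if err := json.NewDecoder(resp.Body).Decode(&nl); err != nil {
		panic(err)
	}
	for _, n := range nl.Items {
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
			n.Metadata.Name, n.Status.Capacity["ephemeral-storage"], n.Status.Capacity["cpu"])
	}
}
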
	I0108 21:08:47.207270  162103 start.go:228] waiting for startup goroutines ...
	I0108 21:08:47.207276  162103 start.go:233] waiting for cluster config update ...
	I0108 21:08:47.207285  162103 start.go:242] writing updated cluster config ...
	I0108 21:08:47.209362  162103 out.go:177] 
	I0108 21:08:47.210820  162103 config.go:182] Loaded profile config "multinode-472593": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 21:08:47.210886  162103 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/config.json ...
	I0108 21:08:47.212693  162103 out.go:177] * Starting worker node multinode-472593-m02 in cluster multinode-472593
	I0108 21:08:47.213976  162103 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 21:08:47.214005  162103 cache.go:56] Caching tarball of preloaded images
	I0108 21:08:47.214124  162103 preload.go:174] Found /home/jenkins/minikube-integration/17866-142784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 21:08:47.214136  162103 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0108 21:08:47.214209  162103 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/config.json ...
	I0108 21:08:47.214380  162103 start.go:365] acquiring machines lock for multinode-472593-m02: {Name:mk82511c12c99b4c49d70e636cfc8467781aa323 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 21:08:47.214421  162103 start.go:369] acquired machines lock for "multinode-472593-m02" in 22.358µs
	I0108 21:08:47.214438  162103 start.go:93] Provisioning new machine with config: &{Name:multinode-472593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-472593 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequ
ested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
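
Provisioning starts only after the per-profile machines lock is held, so two concurrent runs cannot create the same VM. A hypothetical flock-based equivalent (minikube's actual lock implementation differs and is not reproduced here; the lock-file path is a placeholder):

// Sketch: serialize machine creation with an exclusive file lock,
// a stand-in for the "machines lock" acquired in the log.
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	f, err := os.OpenFile("/tmp/minikube-machines.lock", os.O_CREATE|os.O_RDWR, 0o644)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Block until we hold the exclusive lock, then release it on exit.
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		panic(err)
	}
	defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)

	fmt.Println("acquired machines lock; safe to provision m02")
}
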
	I0108 21:08:47.214561  162103 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0108 21:08:47.216339  162103 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0108 21:08:47.216440  162103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0108 21:08:47.216477  162103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:08:47.231184  162103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34149
	I0108 21:08:47.231607  162103 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:08:47.232095  162103 main.go:141] libmachine: Using API Version  1
	I0108 21:08:47.232133  162103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:08:47.232452  162103 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:08:47.232621  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetMachineName
	I0108 21:08:47.232775  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .DriverName
	I0108 21:08:47.232959  162103 start.go:159] libmachine.API.Create for "multinode-472593" (driver="kvm2")
	I0108 21:08:47.232987  162103 client.go:168] LocalClient.Create starting
	I0108 21:08:47.233027  162103 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca.pem
	I0108 21:08:47.233076  162103 main.go:141] libmachine: Decoding PEM data...
	I0108 21:08:47.233100  162103 main.go:141] libmachine: Parsing certificate...
	I0108 21:08:47.233164  162103 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17866-142784/.minikube/certs/cert.pem
	I0108 21:08:47.233193  162103 main.go:141] libmachine: Decoding PEM data...
	I0108 21:08:47.233201  162103 main.go:141] libmachine: Parsing certificate...
	I0108 21:08:47.233220  162103 main.go:141] libmachine: Running pre-create checks...
	I0108 21:08:47.233236  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .PreCreateCheck
	I0108 21:08:47.233428  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetConfigRaw
	I0108 21:08:47.233893  162103 main.go:141] libmachine: Creating machine...
	I0108 21:08:47.233915  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .Create
	I0108 21:08:47.234050  162103 main.go:141] libmachine: (multinode-472593-m02) Creating KVM machine...
	I0108 21:08:47.235319  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | found existing default KVM network
	I0108 21:08:47.235425  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | found existing private KVM network mk-multinode-472593
	I0108 21:08:47.235620  162103 main.go:141] libmachine: (multinode-472593-m02) Setting up store path in /home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593-m02 ...
	I0108 21:08:47.235645  162103 main.go:141] libmachine: (multinode-472593-m02) Building disk image from file:///home/jenkins/minikube-integration/17866-142784/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0108 21:08:47.235717  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | I0108 21:08:47.235609  162486 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17866-142784/.minikube
	I0108 21:08:47.235826  162103 main.go:141] libmachine: (multinode-472593-m02) Downloading /home/jenkins/minikube-integration/17866-142784/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17866-142784/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0108 21:08:47.443047  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | I0108 21:08:47.442846  162486 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593-m02/id_rsa...
	I0108 21:08:47.529713  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | I0108 21:08:47.529584  162486 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593-m02/multinode-472593-m02.rawdisk...
	I0108 21:08:47.529752  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | Writing magic tar header
	I0108 21:08:47.529769  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | Writing SSH key tar header
	I0108 21:08:47.529777  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | I0108 21:08:47.529724  162486 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593-m02 ...
	I0108 21:08:47.529830  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593-m02
	I0108 21:08:47.529894  162103 main.go:141] libmachine: (multinode-472593-m02) Setting executable bit set on /home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593-m02 (perms=drwx------)
	I0108 21:08:47.529918  162103 main.go:141] libmachine: (multinode-472593-m02) Setting executable bit set on /home/jenkins/minikube-integration/17866-142784/.minikube/machines (perms=drwxr-xr-x)
	I0108 21:08:47.529929  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-142784/.minikube/machines
	I0108 21:08:47.529937  162103 main.go:141] libmachine: (multinode-472593-m02) Setting executable bit set on /home/jenkins/minikube-integration/17866-142784/.minikube (perms=drwxr-xr-x)
	I0108 21:08:47.529950  162103 main.go:141] libmachine: (multinode-472593-m02) Setting executable bit set on /home/jenkins/minikube-integration/17866-142784 (perms=drwxrwxr-x)
	I0108 21:08:47.529958  162103 main.go:141] libmachine: (multinode-472593-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0108 21:08:47.529969  162103 main.go:141] libmachine: (multinode-472593-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0108 21:08:47.529978  162103 main.go:141] libmachine: (multinode-472593-m02) Creating domain...
	I0108 21:08:47.529992  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-142784/.minikube
	I0108 21:08:47.530002  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17866-142784
	I0108 21:08:47.530027  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0108 21:08:47.530052  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | Checking permissions on dir: /home/jenkins
	I0108 21:08:47.530068  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | Checking permissions on dir: /home
	I0108 21:08:47.530088  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | Skipping /home - not owner
	I0108 21:08:47.530984  162103 main.go:141] libmachine: (multinode-472593-m02) define libvirt domain using xml: 
	I0108 21:08:47.531013  162103 main.go:141] libmachine: (multinode-472593-m02) <domain type='kvm'>
	I0108 21:08:47.531026  162103 main.go:141] libmachine: (multinode-472593-m02)   <name>multinode-472593-m02</name>
	I0108 21:08:47.531040  162103 main.go:141] libmachine: (multinode-472593-m02)   <memory unit='MiB'>2200</memory>
	I0108 21:08:47.531049  162103 main.go:141] libmachine: (multinode-472593-m02)   <vcpu>2</vcpu>
	I0108 21:08:47.531055  162103 main.go:141] libmachine: (multinode-472593-m02)   <features>
	I0108 21:08:47.531064  162103 main.go:141] libmachine: (multinode-472593-m02)     <acpi/>
	I0108 21:08:47.531070  162103 main.go:141] libmachine: (multinode-472593-m02)     <apic/>
	I0108 21:08:47.531081  162103 main.go:141] libmachine: (multinode-472593-m02)     <pae/>
	I0108 21:08:47.531088  162103 main.go:141] libmachine: (multinode-472593-m02)     
	I0108 21:08:47.531094  162103 main.go:141] libmachine: (multinode-472593-m02)   </features>
	I0108 21:08:47.531102  162103 main.go:141] libmachine: (multinode-472593-m02)   <cpu mode='host-passthrough'>
	I0108 21:08:47.531130  162103 main.go:141] libmachine: (multinode-472593-m02)   
	I0108 21:08:47.531153  162103 main.go:141] libmachine: (multinode-472593-m02)   </cpu>
	I0108 21:08:47.531166  162103 main.go:141] libmachine: (multinode-472593-m02)   <os>
	I0108 21:08:47.531180  162103 main.go:141] libmachine: (multinode-472593-m02)     <type>hvm</type>
	I0108 21:08:47.531192  162103 main.go:141] libmachine: (multinode-472593-m02)     <boot dev='cdrom'/>
	I0108 21:08:47.531204  162103 main.go:141] libmachine: (multinode-472593-m02)     <boot dev='hd'/>
	I0108 21:08:47.531214  162103 main.go:141] libmachine: (multinode-472593-m02)     <bootmenu enable='no'/>
	I0108 21:08:47.531222  162103 main.go:141] libmachine: (multinode-472593-m02)   </os>
	I0108 21:08:47.531231  162103 main.go:141] libmachine: (multinode-472593-m02)   <devices>
	I0108 21:08:47.531243  162103 main.go:141] libmachine: (multinode-472593-m02)     <disk type='file' device='cdrom'>
	I0108 21:08:47.531266  162103 main.go:141] libmachine: (multinode-472593-m02)       <source file='/home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593-m02/boot2docker.iso'/>
	I0108 21:08:47.531280  162103 main.go:141] libmachine: (multinode-472593-m02)       <target dev='hdc' bus='scsi'/>
	I0108 21:08:47.531293  162103 main.go:141] libmachine: (multinode-472593-m02)       <readonly/>
	I0108 21:08:47.531303  162103 main.go:141] libmachine: (multinode-472593-m02)     </disk>
	I0108 21:08:47.531309  162103 main.go:141] libmachine: (multinode-472593-m02)     <disk type='file' device='disk'>
	I0108 21:08:47.531324  162103 main.go:141] libmachine: (multinode-472593-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0108 21:08:47.531347  162103 main.go:141] libmachine: (multinode-472593-m02)       <source file='/home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593-m02/multinode-472593-m02.rawdisk'/>
	I0108 21:08:47.531362  162103 main.go:141] libmachine: (multinode-472593-m02)       <target dev='hda' bus='virtio'/>
	I0108 21:08:47.531374  162103 main.go:141] libmachine: (multinode-472593-m02)     </disk>
	I0108 21:08:47.531388  162103 main.go:141] libmachine: (multinode-472593-m02)     <interface type='network'>
	I0108 21:08:47.531398  162103 main.go:141] libmachine: (multinode-472593-m02)       <source network='mk-multinode-472593'/>
	I0108 21:08:47.531406  162103 main.go:141] libmachine: (multinode-472593-m02)       <model type='virtio'/>
	I0108 21:08:47.531419  162103 main.go:141] libmachine: (multinode-472593-m02)     </interface>
	I0108 21:08:47.531434  162103 main.go:141] libmachine: (multinode-472593-m02)     <interface type='network'>
	I0108 21:08:47.531447  162103 main.go:141] libmachine: (multinode-472593-m02)       <source network='default'/>
	I0108 21:08:47.531460  162103 main.go:141] libmachine: (multinode-472593-m02)       <model type='virtio'/>
	I0108 21:08:47.531469  162103 main.go:141] libmachine: (multinode-472593-m02)     </interface>
	I0108 21:08:47.531481  162103 main.go:141] libmachine: (multinode-472593-m02)     <serial type='pty'>
	I0108 21:08:47.531495  162103 main.go:141] libmachine: (multinode-472593-m02)       <target port='0'/>
	I0108 21:08:47.531509  162103 main.go:141] libmachine: (multinode-472593-m02)     </serial>
	I0108 21:08:47.531522  162103 main.go:141] libmachine: (multinode-472593-m02)     <console type='pty'>
	I0108 21:08:47.531536  162103 main.go:141] libmachine: (multinode-472593-m02)       <target type='serial' port='0'/>
	I0108 21:08:47.531554  162103 main.go:141] libmachine: (multinode-472593-m02)     </console>
	I0108 21:08:47.531567  162103 main.go:141] libmachine: (multinode-472593-m02)     <rng model='virtio'>
	I0108 21:08:47.531576  162103 main.go:141] libmachine: (multinode-472593-m02)       <backend model='random'>/dev/random</backend>
	I0108 21:08:47.531586  162103 main.go:141] libmachine: (multinode-472593-m02)     </rng>
	I0108 21:08:47.531597  162103 main.go:141] libmachine: (multinode-472593-m02)     
	I0108 21:08:47.531608  162103 main.go:141] libmachine: (multinode-472593-m02)     
	I0108 21:08:47.531623  162103 main.go:141] libmachine: (multinode-472593-m02)   </devices>
	I0108 21:08:47.531636  162103 main.go:141] libmachine: (multinode-472593-m02) </domain>
	I0108 21:08:47.531644  162103 main.go:141] libmachine: (multinode-472593-m02) 
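
The kvm2 driver materializes the VM by handing libvirt a <domain> definition like the XML above. A sketch of the same define-then-start flow shelled out to virsh, with a trimmed stand-in XML (a real domain would also need the disk, network, and console devices shown in the log; the domain name here is hypothetical):

// Sketch: define and start a libvirt domain from XML, as the
// "Creating domain..." step does, but via the virsh CLI.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

const domainXML = `<domain type='kvm'>
  <name>demo-m02</name>
  <memory unit='MiB'>2200</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type><boot dev='hd'/></os>
</domain>`

func main() {
	tmp, err := os.CreateTemp("", "domain-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(tmp.Name())
	if _, err := tmp.WriteString(domainXML); err != nil {
		panic(err)
	}
	tmp.Close()

	for _, args := range [][]string{
		{"define", tmp.Name()}, // register the domain with libvirt
		{"start", "demo-m02"},  // boot it
	} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		fmt.Printf("virsh %v: %s\n", args, out)
		if err != nil {
			panic(err)
		}
	}
}
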
	I0108 21:08:47.538785  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:71:b0:36 in network default
	I0108 21:08:47.539300  162103 main.go:141] libmachine: (multinode-472593-m02) Ensuring networks are active...
	I0108 21:08:47.539333  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:08:47.540023  162103 main.go:141] libmachine: (multinode-472593-m02) Ensuring network default is active
	I0108 21:08:47.540334  162103 main.go:141] libmachine: (multinode-472593-m02) Ensuring network mk-multinode-472593 is active
	I0108 21:08:47.540602  162103 main.go:141] libmachine: (multinode-472593-m02) Getting domain xml...
	I0108 21:08:47.541288  162103 main.go:141] libmachine: (multinode-472593-m02) Creating domain...
	I0108 21:08:48.810914  162103 main.go:141] libmachine: (multinode-472593-m02) Waiting to get IP...
	I0108 21:08:48.811700  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:08:48.812107  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | unable to find current IP address of domain multinode-472593-m02 in network mk-multinode-472593
	I0108 21:08:48.812157  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | I0108 21:08:48.812098  162486 retry.go:31] will retry after 196.603876ms: waiting for machine to come up
	I0108 21:08:49.010619  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:08:49.011084  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | unable to find current IP address of domain multinode-472593-m02 in network mk-multinode-472593
	I0108 21:08:49.011123  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | I0108 21:08:49.011032  162486 retry.go:31] will retry after 325.334102ms: waiting for machine to come up
	I0108 21:08:49.337423  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:08:49.337822  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | unable to find current IP address of domain multinode-472593-m02 in network mk-multinode-472593
	I0108 21:08:49.337848  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | I0108 21:08:49.337794  162486 retry.go:31] will retry after 374.398644ms: waiting for machine to come up
	I0108 21:08:49.713560  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:08:49.714034  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | unable to find current IP address of domain multinode-472593-m02 in network mk-multinode-472593
	I0108 21:08:49.714063  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | I0108 21:08:49.713974  162486 retry.go:31] will retry after 592.398724ms: waiting for machine to come up
	I0108 21:08:50.307974  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:08:50.308493  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | unable to find current IP address of domain multinode-472593-m02 in network mk-multinode-472593
	I0108 21:08:50.308525  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | I0108 21:08:50.308442  162486 retry.go:31] will retry after 539.137222ms: waiting for machine to come up
	I0108 21:08:50.848880  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:08:50.849375  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | unable to find current IP address of domain multinode-472593-m02 in network mk-multinode-472593
	I0108 21:08:50.849415  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | I0108 21:08:50.849325  162486 retry.go:31] will retry after 904.625201ms: waiting for machine to come up
	I0108 21:08:51.755327  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:08:51.755837  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | unable to find current IP address of domain multinode-472593-m02 in network mk-multinode-472593
	I0108 21:08:51.755869  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | I0108 21:08:51.755781  162486 retry.go:31] will retry after 1.152618803s: waiting for machine to come up
	I0108 21:08:52.909749  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:08:52.910207  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | unable to find current IP address of domain multinode-472593-m02 in network mk-multinode-472593
	I0108 21:08:52.910239  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | I0108 21:08:52.910149  162486 retry.go:31] will retry after 1.057543164s: waiting for machine to come up
	I0108 21:08:53.969314  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:08:53.969707  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | unable to find current IP address of domain multinode-472593-m02 in network mk-multinode-472593
	I0108 21:08:53.969732  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | I0108 21:08:53.969670  162486 retry.go:31] will retry after 1.529255703s: waiting for machine to come up
	I0108 21:08:55.501387  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:08:55.501771  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | unable to find current IP address of domain multinode-472593-m02 in network mk-multinode-472593
	I0108 21:08:55.501803  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | I0108 21:08:55.501738  162486 retry.go:31] will retry after 1.989980469s: waiting for machine to come up
	I0108 21:08:57.493660  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:08:57.494168  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | unable to find current IP address of domain multinode-472593-m02 in network mk-multinode-472593
	I0108 21:08:57.494233  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | I0108 21:08:57.494140  162486 retry.go:31] will retry after 2.320896399s: waiting for machine to come up
	I0108 21:08:59.817605  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:08:59.818157  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | unable to find current IP address of domain multinode-472593-m02 in network mk-multinode-472593
	I0108 21:08:59.818186  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | I0108 21:08:59.818099  162486 retry.go:31] will retry after 2.955293558s: waiting for machine to come up
	I0108 21:09:02.774768  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:02.775142  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | unable to find current IP address of domain multinode-472593-m02 in network mk-multinode-472593
	I0108 21:09:02.775177  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | I0108 21:09:02.775098  162486 retry.go:31] will retry after 3.071684963s: waiting for machine to come up
	I0108 21:09:05.850449  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:05.850867  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | unable to find current IP address of domain multinode-472593-m02 in network mk-multinode-472593
	I0108 21:09:05.850894  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | I0108 21:09:05.850816  162486 retry.go:31] will retry after 3.803109866s: waiting for machine to come up
	I0108 21:09:09.656861  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:09.657373  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has current primary IP address 192.168.39.225 and MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:09.657410  162103 main.go:141] libmachine: (multinode-472593-m02) Found IP for machine: 192.168.39.225
	I0108 21:09:09.657426  162103 main.go:141] libmachine: (multinode-472593-m02) Reserving static IP address...
	I0108 21:09:09.657853  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | unable to find host DHCP lease matching {name: "multinode-472593-m02", mac: "52:54:00:92:ba:0a", ip: "192.168.39.225"} in network mk-multinode-472593
	I0108 21:09:09.731082  162103 main.go:141] libmachine: (multinode-472593-m02) Reserved static IP address: 192.168.39.225
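
Getting the IP is a poll of the libvirt network's DHCP leases for the VM's MAC address, with backoff intervals that grow roughly as the retry.go lines show. A sketch using virsh net-dhcp-leases, with the network name and MAC taken from the log:

// Sketch: wait for a DHCP lease with growing backoff, mirroring the
// "Waiting to get IP..." retry loop above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func leaseIP(network, mac string) (string, bool) {
	out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
	if err != nil {
		return "", false
	}
	for _, line := range strings.Split(string(out), "\n") {
		if !strings.Contains(line, mac) {
			continue
		}
		for _, f := range strings.Fields(line) {
			if strings.Contains(f, "/") { // address column, e.g. 192.168.39.225/24
				return strings.SplitN(f, "/", 2)[0], true
			}
		}
	}
	return "", false
}

func main() {
	backoff := 200 * time.Millisecond
	for i := 0; i < 15; i++ {
		if ip, ok := leaseIP("mk-multinode-472593", "52:54:00:92:ba:0a"); ok {
			fmt.Println("machine came up at", ip)
			return
		}
		fmt.Printf("no lease yet, retrying after %v\n", backoff)
		time.Sleep(backoff)
		backoff += backoff / 2 // grow roughly like the log's retry intervals
	}
	fmt.Println("gave up waiting for an IP")
}
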
	I0108 21:09:09.731113  162103 main.go:141] libmachine: (multinode-472593-m02) Waiting for SSH to be available...
	I0108 21:09:09.731126  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | Getting to WaitForSSH function...
	I0108 21:09:09.733663  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:09.734176  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:ba:0a", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:09:02 +0000 UTC Type:0 Mac:52:54:00:92:ba:0a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:minikube Clientid:01:52:54:00:92:ba:0a}
	I0108 21:09:09.734211  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:09.734402  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | Using SSH client type: external
	I0108 21:09:09.734426  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593-m02/id_rsa (-rw-------)
	I0108 21:09:09.734452  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 21:09:09.734466  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | About to run SSH command:
	I0108 21:09:09.734488  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | exit 0
	I0108 21:09:09.833101  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | SSH cmd err, output: <nil>: 
	I0108 21:09:09.833420  162103 main.go:141] libmachine: (multinode-472593-m02) KVM machine creation complete!
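
SSH availability is proven by running exit 0 through an external ssh client with the hardening flags listed above. A sketch that retries the same probe until it succeeds; the key path is a placeholder:

// Sketch: the external-SSH availability probe from the log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshUp(host, key string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", key,
		"-p", "22",
		"docker@" + host,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	for i := 0; i < 30; i++ {
		if sshUp("192.168.39.225", "/path/to/id_rsa") { // key path is a placeholder
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("SSH never came up")
}
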
	I0108 21:09:09.833771  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetConfigRaw
	I0108 21:09:09.834342  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .DriverName
	I0108 21:09:09.834551  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .DriverName
	I0108 21:09:09.834706  162103 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0108 21:09:09.834720  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetState
	I0108 21:09:09.835868  162103 main.go:141] libmachine: Detecting operating system of created instance...
	I0108 21:09:09.835884  162103 main.go:141] libmachine: Waiting for SSH to be available...
	I0108 21:09:09.835891  162103 main.go:141] libmachine: Getting to WaitForSSH function...
	I0108 21:09:09.835897  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHHostname
	I0108 21:09:09.837855  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:09.838231  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:ba:0a", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:09:02 +0000 UTC Type:0 Mac:52:54:00:92:ba:0a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:multinode-472593-m02 Clientid:01:52:54:00:92:ba:0a}
	I0108 21:09:09.838256  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:09.838394  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHPort
	I0108 21:09:09.838568  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHKeyPath
	I0108 21:09:09.838730  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHKeyPath
	I0108 21:09:09.838861  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHUsername
	I0108 21:09:09.839060  162103 main.go:141] libmachine: Using SSH client type: native
	I0108 21:09:09.839414  162103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0108 21:09:09.839436  162103 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0108 21:09:09.969332  162103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 21:09:09.969366  162103 main.go:141] libmachine: Detecting the provisioner...
	I0108 21:09:09.969380  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHHostname
	I0108 21:09:09.972769  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:09.973170  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:ba:0a", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:09:02 +0000 UTC Type:0 Mac:52:54:00:92:ba:0a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:multinode-472593-m02 Clientid:01:52:54:00:92:ba:0a}
	I0108 21:09:09.973196  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:09.973336  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHPort
	I0108 21:09:09.973581  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHKeyPath
	I0108 21:09:09.973785  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHKeyPath
	I0108 21:09:09.973963  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHUsername
	I0108 21:09:09.974146  162103 main.go:141] libmachine: Using SSH client type: native
	I0108 21:09:09.974600  162103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0108 21:09:09.974618  162103 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0108 21:09:10.105846  162103 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0108 21:09:10.105903  162103 main.go:141] libmachine: found compatible host: buildroot
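	Provisioner detection is just the `cat /etc/os-release` above: the output is shell-sourceable key=value pairs, and the ID field selects the provisioner (ID=buildroot here). A sketch of the same check, reusing $host and $key from the probe sketch earlier:
	
	    # /etc/os-release is designed to be sourced; the quoting in PRETTY_NAME is legal shell.
	    eval "$(ssh -i "$key" docker@"$host" cat /etc/os-release)"
	    case "$ID" in
	      buildroot) echo "found compatible host: buildroot" ;;
	      *)         echo "no provisioner for ID=$ID" >&2 ;;
	    esac
	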
	I0108 21:09:10.105915  162103 main.go:141] libmachine: Provisioning with buildroot...
	I0108 21:09:10.105928  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetMachineName
	I0108 21:09:10.106168  162103 buildroot.go:166] provisioning hostname "multinode-472593-m02"
	I0108 21:09:10.106196  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetMachineName
	I0108 21:09:10.106366  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHHostname
	I0108 21:09:10.109004  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:10.109351  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:ba:0a", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:09:02 +0000 UTC Type:0 Mac:52:54:00:92:ba:0a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:multinode-472593-m02 Clientid:01:52:54:00:92:ba:0a}
	I0108 21:09:10.109383  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:10.109549  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHPort
	I0108 21:09:10.109735  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHKeyPath
	I0108 21:09:10.109899  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHKeyPath
	I0108 21:09:10.110001  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHUsername
	I0108 21:09:10.110143  162103 main.go:141] libmachine: Using SSH client type: native
	I0108 21:09:10.110475  162103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0108 21:09:10.110490  162103 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-472593-m02 && echo "multinode-472593-m02" | sudo tee /etc/hostname
	I0108 21:09:10.253089  162103 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-472593-m02
	
	I0108 21:09:10.253124  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHHostname
	I0108 21:09:10.256925  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:10.257325  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:ba:0a", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:09:02 +0000 UTC Type:0 Mac:52:54:00:92:ba:0a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:multinode-472593-m02 Clientid:01:52:54:00:92:ba:0a}
	I0108 21:09:10.257357  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:10.257561  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHPort
	I0108 21:09:10.257734  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHKeyPath
	I0108 21:09:10.257928  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHKeyPath
	I0108 21:09:10.258083  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHUsername
	I0108 21:09:10.258285  162103 main.go:141] libmachine: Using SSH client type: native
	I0108 21:09:10.258618  162103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0108 21:09:10.258655  162103 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-472593-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-472593-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-472593-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 21:09:10.396488  162103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
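	The two hostname commands above first set the persistent and transient hostname, then patch /etc/hosts so the new name resolves locally even before cluster DNS exists: an existing 127.0.1.1 line is rewritten in place, otherwise one is appended. The intended end state (127.0.1.1 being the usual Debian-style address for a machine's own name; the localhost line is assumed, not shown in this run):
	
	    127.0.0.1   localhost
	    127.0.1.1   multinode-472593-m02
	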
	I0108 21:09:10.396515  162103 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17866-142784/.minikube CaCertPath:/home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17866-142784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17866-142784/.minikube}
	I0108 21:09:10.396529  162103 buildroot.go:174] setting up certificates
	I0108 21:09:10.396538  162103 provision.go:83] configureAuth start
	I0108 21:09:10.396547  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetMachineName
	I0108 21:09:10.396840  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetIP
	I0108 21:09:10.399512  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:10.399835  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:ba:0a", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:09:02 +0000 UTC Type:0 Mac:52:54:00:92:ba:0a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:multinode-472593-m02 Clientid:01:52:54:00:92:ba:0a}
	I0108 21:09:10.399857  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:10.400070  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHHostname
	I0108 21:09:10.402176  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:10.402511  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:ba:0a", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:09:02 +0000 UTC Type:0 Mac:52:54:00:92:ba:0a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:multinode-472593-m02 Clientid:01:52:54:00:92:ba:0a}
	I0108 21:09:10.402543  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:10.402674  162103 provision.go:138] copyHostCerts
	I0108 21:09:10.402733  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17866-142784/.minikube/ca.pem
	I0108 21:09:10.402776  162103 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-142784/.minikube/ca.pem, removing ...
	I0108 21:09:10.402785  162103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-142784/.minikube/ca.pem
	I0108 21:09:10.402897  162103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17866-142784/.minikube/ca.pem (1078 bytes)
	I0108 21:09:10.402997  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17866-142784/.minikube/cert.pem
	I0108 21:09:10.403022  162103 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-142784/.minikube/cert.pem, removing ...
	I0108 21:09:10.403032  162103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-142784/.minikube/cert.pem
	I0108 21:09:10.403066  162103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17866-142784/.minikube/cert.pem (1123 bytes)
	I0108 21:09:10.403112  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17866-142784/.minikube/key.pem
	I0108 21:09:10.403130  162103 exec_runner.go:144] found /home/jenkins/minikube-integration/17866-142784/.minikube/key.pem, removing ...
	I0108 21:09:10.403137  162103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17866-142784/.minikube/key.pem
	I0108 21:09:10.403159  162103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17866-142784/.minikube/key.pem (1679 bytes)
	I0108 21:09:10.403204  162103 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17866-142784/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca-key.pem org=jenkins.multinode-472593-m02 san=[192.168.39.225 192.168.39.225 localhost 127.0.0.1 minikube multinode-472593-m02]
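	The server certificate generated here must carry every name and address the Docker daemon will be reached by as Subject Alternative Names (the san=[...] list above), or TLS clients will refuse the connection. minikube does this in Go; a hedged openssl equivalent with illustrative file names:
	
	    # Issue a server cert signed by the machine CA, with the SANs from the log.
	    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	      -subj "/O=jenkins.multinode-472593-m02/CN=multinode-472593-m02"
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	      -days 365 -out server.pem \
	      -extfile <(printf 'subjectAltName=IP:192.168.39.225,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-472593-m02')
	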
	I0108 21:09:10.579059  162103 provision.go:172] copyRemoteCerts
	I0108 21:09:10.579119  162103 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 21:09:10.579149  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHHostname
	I0108 21:09:10.581956  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:10.582344  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:ba:0a", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:09:02 +0000 UTC Type:0 Mac:52:54:00:92:ba:0a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:multinode-472593-m02 Clientid:01:52:54:00:92:ba:0a}
	I0108 21:09:10.582374  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:10.582597  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHPort
	I0108 21:09:10.582781  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHKeyPath
	I0108 21:09:10.582932  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHUsername
	I0108 21:09:10.583048  162103 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593-m02/id_rsa Username:docker}
	I0108 21:09:10.678752  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0108 21:09:10.678826  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 21:09:10.698910  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0108 21:09:10.698997  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0108 21:09:10.718702  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0108 21:09:10.718762  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 21:09:10.740745  162103 provision.go:86] duration metric: configureAuth took 344.192311ms
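	The three files just copied into /etc/docker (ca.pem, server.pem, server-key.pem) are the server side of the TLS setup that the dockerd ExecStart below references via --tlscacert/--tlscert/--tlskey. A client reaching the daemon on tcp://<ip>:2376 must present a certificate signed by the same CA, along the lines of:
	
	    docker --tlsverify \
	      --tlscacert /home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca.pem \
	      --tlscert   /home/jenkins/minikube-integration/17866-142784/.minikube/certs/cert.pem \
	      --tlskey    /home/jenkins/minikube-integration/17866-142784/.minikube/certs/key.pem \
	      -H tcp://192.168.39.225:2376 version
	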
	I0108 21:09:10.740786  162103 buildroot.go:189] setting minikube options for container-runtime
	I0108 21:09:10.741004  162103 config.go:182] Loaded profile config "multinode-472593": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 21:09:10.741033  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .DriverName
	I0108 21:09:10.741330  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHHostname
	I0108 21:09:10.744178  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:10.744607  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:ba:0a", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:09:02 +0000 UTC Type:0 Mac:52:54:00:92:ba:0a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:multinode-472593-m02 Clientid:01:52:54:00:92:ba:0a}
	I0108 21:09:10.744647  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:10.744749  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHPort
	I0108 21:09:10.744948  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHKeyPath
	I0108 21:09:10.745122  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHKeyPath
	I0108 21:09:10.745272  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHUsername
	I0108 21:09:10.745444  162103 main.go:141] libmachine: Using SSH client type: native
	I0108 21:09:10.745801  162103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0108 21:09:10.745815  162103 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0108 21:09:10.879091  162103 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0108 21:09:10.879115  162103 buildroot.go:70] root file system type: tmpfs
	I0108 21:09:10.879238  162103 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0108 21:09:10.879255  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHHostname
	I0108 21:09:10.881850  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:10.882225  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:ba:0a", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:09:02 +0000 UTC Type:0 Mac:52:54:00:92:ba:0a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:multinode-472593-m02 Clientid:01:52:54:00:92:ba:0a}
	I0108 21:09:10.882251  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:10.882426  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHPort
	I0108 21:09:10.882596  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHKeyPath
	I0108 21:09:10.882756  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHKeyPath
	I0108 21:09:10.882886  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHUsername
	I0108 21:09:10.883060  162103 main.go:141] libmachine: Using SSH client type: native
	I0108 21:09:10.883427  162103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0108 21:09:10.883487  162103 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.250"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0108 21:09:11.030789  162103 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.250
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0108 21:09:11.030822  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHHostname
	I0108 21:09:11.033714  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:11.034154  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:ba:0a", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:09:02 +0000 UTC Type:0 Mac:52:54:00:92:ba:0a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:multinode-472593-m02 Clientid:01:52:54:00:92:ba:0a}
	I0108 21:09:11.034189  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:11.034376  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHPort
	I0108 21:09:11.034540  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHKeyPath
	I0108 21:09:11.034711  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHKeyPath
	I0108 21:09:11.034807  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHUsername
	I0108 21:09:11.034981  162103 main.go:141] libmachine: Using SSH client type: native
	I0108 21:09:11.035308  162103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0108 21:09:11.035329  162103 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0108 21:09:11.829360  162103 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
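	Two details of the install step above are worth spelling out. The bare ExecStart= line in the unit is the standard systemd idiom for replacing, rather than appending to, an inherited ExecStart; and the diff/mv one-liner is an idempotency guard: `diff -u old new` exits non-zero when the files differ (or, as on this first boot, when the old unit doesn't exist), and only then is the new unit installed and docker restarted. The same guard, spelled out:
	
	    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	      sudo systemctl -f daemon-reload
	      sudo systemctl -f enable docker
	      sudo systemctl -f restart docker
	    fi
	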
	
	I0108 21:09:11.829386  162103 main.go:141] libmachine: Checking connection to Docker...
	I0108 21:09:11.829420  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetURL
	I0108 21:09:11.830829  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | Using libvirt version 6000000
	I0108 21:09:11.833019  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:11.833352  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:ba:0a", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:09:02 +0000 UTC Type:0 Mac:52:54:00:92:ba:0a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:multinode-472593-m02 Clientid:01:52:54:00:92:ba:0a}
	I0108 21:09:11.833386  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:11.833552  162103 main.go:141] libmachine: Docker is up and running!
	I0108 21:09:11.833568  162103 main.go:141] libmachine: Reticulating splines...
	I0108 21:09:11.833576  162103 client.go:171] LocalClient.Create took 24.60057922s
	I0108 21:09:11.833596  162103 start.go:167] duration metric: libmachine.API.Create for "multinode-472593" took 24.600639669s
	I0108 21:09:11.833605  162103 start.go:300] post-start starting for "multinode-472593-m02" (driver="kvm2")
	I0108 21:09:11.833614  162103 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 21:09:11.833630  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .DriverName
	I0108 21:09:11.833876  162103 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 21:09:11.833902  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHHostname
	I0108 21:09:11.836163  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:11.836476  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:ba:0a", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:09:02 +0000 UTC Type:0 Mac:52:54:00:92:ba:0a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:multinode-472593-m02 Clientid:01:52:54:00:92:ba:0a}
	I0108 21:09:11.836509  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:11.836652  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHPort
	I0108 21:09:11.836821  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHKeyPath
	I0108 21:09:11.836977  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHUsername
	I0108 21:09:11.837140  162103 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593-m02/id_rsa Username:docker}
	I0108 21:09:11.930534  162103 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 21:09:11.934393  162103 command_runner.go:130] > NAME=Buildroot
	I0108 21:09:11.934413  162103 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0108 21:09:11.934418  162103 command_runner.go:130] > ID=buildroot
	I0108 21:09:11.934424  162103 command_runner.go:130] > VERSION_ID=2021.02.12
	I0108 21:09:11.934429  162103 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0108 21:09:11.934453  162103 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 21:09:11.934465  162103 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-142784/.minikube/addons for local assets ...
	I0108 21:09:11.934536  162103 filesync.go:126] Scanning /home/jenkins/minikube-integration/17866-142784/.minikube/files for local assets ...
	I0108 21:09:11.934626  162103 filesync.go:149] local asset: /home/jenkins/minikube-integration/17866-142784/.minikube/files/etc/ssl/certs/1499882.pem -> 1499882.pem in /etc/ssl/certs
	I0108 21:09:11.934639  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/files/etc/ssl/certs/1499882.pem -> /etc/ssl/certs/1499882.pem
	I0108 21:09:11.934740  162103 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0108 21:09:11.942576  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/files/etc/ssl/certs/1499882.pem --> /etc/ssl/certs/1499882.pem (1708 bytes)
	I0108 21:09:11.964004  162103 start.go:303] post-start completed in 130.384949ms
	I0108 21:09:11.964049  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetConfigRaw
	I0108 21:09:11.964638  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetIP
	I0108 21:09:11.967125  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:11.967492  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:ba:0a", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:09:02 +0000 UTC Type:0 Mac:52:54:00:92:ba:0a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:multinode-472593-m02 Clientid:01:52:54:00:92:ba:0a}
	I0108 21:09:11.967530  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:11.967754  162103 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/config.json ...
	I0108 21:09:11.967998  162103 start.go:128] duration metric: createHost completed in 24.75342281s
	I0108 21:09:11.968026  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHHostname
	I0108 21:09:11.970635  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:11.971045  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:ba:0a", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:09:02 +0000 UTC Type:0 Mac:52:54:00:92:ba:0a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:multinode-472593-m02 Clientid:01:52:54:00:92:ba:0a}
	I0108 21:09:11.971068  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:11.971270  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHPort
	I0108 21:09:11.971446  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHKeyPath
	I0108 21:09:11.971612  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHKeyPath
	I0108 21:09:11.971790  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHUsername
	I0108 21:09:11.971966  162103 main.go:141] libmachine: Using SSH client type: native
	I0108 21:09:11.972332  162103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0108 21:09:11.972344  162103 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0108 21:09:12.106101  162103 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704748152.080511943
	
	I0108 21:09:12.106120  162103 fix.go:206] guest clock: 1704748152.080511943
	I0108 21:09:12.106127  162103 fix.go:219] Guest: 2024-01-08 21:09:12.080511943 +0000 UTC Remote: 2024-01-08 21:09:11.968012045 +0000 UTC m=+100.886280154 (delta=112.499898ms)
	I0108 21:09:12.106143  162103 fix.go:190] guest clock delta is within tolerance: 112.499898ms
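	The clock check works off the guest's `date +%s.%N` output (seconds.nanoseconds since the epoch): 1704748152.080511943 is 2024-01-08 21:09:12.080 UTC, about 112.5 ms ahead of the host reading taken at 21:09:11.968, which is inside tolerance. Measuring the skew by hand looks like (assuming bc is installed on the host):
	
	    guest=$(ssh -i "$key" docker@"$host" date +%s.%N)
	    hostnow=$(date +%s.%N)
	    echo "guest-host delta: $(echo "$guest - $hostnow" | bc) s"
	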
	I0108 21:09:12.106148  162103 start.go:83] releasing machines lock for "multinode-472593-m02", held for 24.891718505s
	I0108 21:09:12.106165  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .DriverName
	I0108 21:09:12.106411  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetIP
	I0108 21:09:12.109092  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:12.109482  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:ba:0a", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:09:02 +0000 UTC Type:0 Mac:52:54:00:92:ba:0a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:multinode-472593-m02 Clientid:01:52:54:00:92:ba:0a}
	I0108 21:09:12.109505  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:12.111964  162103 out.go:177] * Found network options:
	I0108 21:09:12.113369  162103 out.go:177]   - NO_PROXY=192.168.39.250
	W0108 21:09:12.114749  162103 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 21:09:12.114781  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .DriverName
	I0108 21:09:12.115447  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .DriverName
	I0108 21:09:12.115652  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .DriverName
	I0108 21:09:12.115767  162103 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 21:09:12.115807  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHHostname
	W0108 21:09:12.115830  162103 proxy.go:119] fail to check proxy env: Error ip not in block
	I0108 21:09:12.115911  162103 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0108 21:09:12.115935  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHHostname
	I0108 21:09:12.118542  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:12.118833  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:ba:0a", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:09:02 +0000 UTC Type:0 Mac:52:54:00:92:ba:0a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:multinode-472593-m02 Clientid:01:52:54:00:92:ba:0a}
	I0108 21:09:12.118876  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:12.118936  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:12.118982  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHPort
	I0108 21:09:12.119165  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHKeyPath
	I0108 21:09:12.119274  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:ba:0a", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:09:02 +0000 UTC Type:0 Mac:52:54:00:92:ba:0a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:multinode-472593-m02 Clientid:01:52:54:00:92:ba:0a}
	I0108 21:09:12.119317  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:12.119337  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHUsername
	I0108 21:09:12.119454  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHPort
	I0108 21:09:12.119547  162103 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593-m02/id_rsa Username:docker}
	I0108 21:09:12.119635  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHKeyPath
	I0108 21:09:12.119803  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHUsername
	I0108 21:09:12.119917  162103 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593-m02/id_rsa Username:docker}
	I0108 21:09:12.233434  162103 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0108 21:09:12.233766  162103 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0108 21:09:12.233809  162103 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 21:09:12.233864  162103 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 21:09:12.247627  162103 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0108 21:09:12.247682  162103 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
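	Because this cluster's CNI is kindnet, any bridge/podman CNI configs pre-baked into the image are renamed out of the way (a .mk_disabled suffix) so the runtime doesn't wire pods to the wrong network before the real CNI lands. The find invocation above both renames and prints what it moved; quoted for an interactive shell it reads:
	
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
	      -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;
	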
	I0108 21:09:12.247699  162103 start.go:475] detecting cgroup driver to use...
	I0108 21:09:12.247849  162103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:09:12.265343  162103 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0108 21:09:12.265744  162103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0108 21:09:12.275722  162103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0108 21:09:12.285468  162103 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0108 21:09:12.285540  162103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0108 21:09:12.296014  162103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 21:09:12.306410  162103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0108 21:09:12.317455  162103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 21:09:12.327890  162103 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 21:09:12.338451  162103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0108 21:09:12.348431  162103 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 21:09:12.358243  162103 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0108 21:09:12.358448  162103 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 21:09:12.367391  162103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:09:12.481181  162103 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 21:09:12.499832  162103 start.go:475] detecting cgroup driver to use...
	I0108 21:09:12.499924  162103 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0108 21:09:12.512245  162103 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0108 21:09:12.513135  162103 command_runner.go:130] > [Unit]
	I0108 21:09:12.513153  162103 command_runner.go:130] > Description=Docker Application Container Engine
	I0108 21:09:12.513160  162103 command_runner.go:130] > Documentation=https://docs.docker.com
	I0108 21:09:12.513166  162103 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0108 21:09:12.513174  162103 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0108 21:09:12.513180  162103 command_runner.go:130] > StartLimitBurst=3
	I0108 21:09:12.513192  162103 command_runner.go:130] > StartLimitIntervalSec=60
	I0108 21:09:12.513198  162103 command_runner.go:130] > [Service]
	I0108 21:09:12.513202  162103 command_runner.go:130] > Type=notify
	I0108 21:09:12.513208  162103 command_runner.go:130] > Restart=on-failure
	I0108 21:09:12.513219  162103 command_runner.go:130] > Environment=NO_PROXY=192.168.39.250
	I0108 21:09:12.513229  162103 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0108 21:09:12.513240  162103 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0108 21:09:12.513248  162103 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0108 21:09:12.513257  162103 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0108 21:09:12.513266  162103 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0108 21:09:12.513275  162103 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0108 21:09:12.513286  162103 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0108 21:09:12.513301  162103 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0108 21:09:12.513315  162103 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0108 21:09:12.513325  162103 command_runner.go:130] > ExecStart=
	I0108 21:09:12.513340  162103 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0108 21:09:12.513353  162103 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0108 21:09:12.513360  162103 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0108 21:09:12.513369  162103 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0108 21:09:12.513376  162103 command_runner.go:130] > LimitNOFILE=infinity
	I0108 21:09:12.513380  162103 command_runner.go:130] > LimitNPROC=infinity
	I0108 21:09:12.513387  162103 command_runner.go:130] > LimitCORE=infinity
	I0108 21:09:12.513404  162103 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0108 21:09:12.513416  162103 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0108 21:09:12.513426  162103 command_runner.go:130] > TasksMax=infinity
	I0108 21:09:12.513456  162103 command_runner.go:130] > TimeoutStartSec=0
	I0108 21:09:12.513474  162103 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0108 21:09:12.513483  162103 command_runner.go:130] > Delegate=yes
	I0108 21:09:12.513497  162103 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0108 21:09:12.513509  162103 command_runner.go:130] > KillMode=process
	I0108 21:09:12.513519  162103 command_runner.go:130] > [Install]
	I0108 21:09:12.513524  162103 command_runner.go:130] > WantedBy=multi-user.target
	I0108 21:09:12.513649  162103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:09:12.526136  162103 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0108 21:09:12.546937  162103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0108 21:09:12.559805  162103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:09:12.571773  162103 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 21:09:12.604053  162103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 21:09:12.616020  162103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 21:09:12.632877  162103 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
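	crictl needs to be told which CRI socket to use; since this node runs Docker behind cri-dockerd, /etc/crictl.yaml now points at /var/run/cri-dockerd.sock (it was briefly pointed at containerd's socket above while that runtime was being configured and then stopped). Once cri-docker is up, a few steps below, the endpoint can be exercised directly:
	
	    cat /etc/crictl.yaml          # runtime-endpoint: unix:///var/run/cri-dockerd.sock
	    sudo crictl version           # should report RuntimeName: docker
	    sudo crictl info | head       # runtime status over the cri-dockerd socket
	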
	I0108 21:09:12.632975  162103 ssh_runner.go:195] Run: which cri-dockerd
	I0108 21:09:12.636392  162103 command_runner.go:130] > /usr/bin/cri-dockerd
	I0108 21:09:12.636504  162103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0108 21:09:12.644440  162103 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0108 21:09:12.659475  162103 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0108 21:09:12.764083  162103 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0108 21:09:12.871585  162103 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I0108 21:09:12.871632  162103 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
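	The "configuring docker to use cgroupfs" step writes a small /etc/docker/daemon.json (130 bytes here); kubelet and the container runtime must agree on the cgroup driver, and on this buildroot image that is cgroupfs rather than systemd. A plausible shape for that file, reconstructed rather than copied from this run:
	
	    {
	      "exec-opts": ["native.cgroupdriver=cgroupfs"],
	      "log-driver": "json-file",
	      "log-opts": { "max-size": "100m" },
	      "storage-driver": "overlay2"
	    }
	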
	I0108 21:09:12.886797  162103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:09:13.007034  162103 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0108 21:09:14.361655  162103 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.354584683s)
	I0108 21:09:14.361742  162103 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 21:09:14.467968  162103 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0108 21:09:14.568267  162103 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0108 21:09:14.685644  162103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:09:14.803544  162103 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0108 21:09:14.819591  162103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 21:09:14.923224  162103 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0108 21:09:15.012599  162103 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0108 21:09:15.012679  162103 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0108 21:09:15.019826  162103 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0108 21:09:15.019856  162103 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0108 21:09:15.019867  162103 command_runner.go:130] > Device: 16h/22d	Inode: 877         Links: 1
	I0108 21:09:15.019878  162103 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0108 21:09:15.019886  162103 command_runner.go:130] > Access: 2024-01-08 21:09:14.914444034 +0000
	I0108 21:09:15.019895  162103 command_runner.go:130] > Modify: 2024-01-08 21:09:14.914444034 +0000
	I0108 21:09:15.019905  162103 command_runner.go:130] > Change: 2024-01-08 21:09:14.916446698 +0000
	I0108 21:09:15.019914  162103 command_runner.go:130] >  Birth: -
	I0108 21:09:15.020154  162103 start.go:543] Will wait 60s for crictl version
	I0108 21:09:15.020218  162103 ssh_runner.go:195] Run: which crictl
	I0108 21:09:15.027495  162103 command_runner.go:130] > /usr/bin/crictl
	I0108 21:09:15.027574  162103 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 21:09:15.083891  162103 command_runner.go:130] > Version:  0.1.0
	I0108 21:09:15.083915  162103 command_runner.go:130] > RuntimeName:  docker
	I0108 21:09:15.083919  162103 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0108 21:09:15.083924  162103 command_runner.go:130] > RuntimeApiVersion:  v1
	I0108 21:09:15.085296  162103 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0108 21:09:15.085356  162103 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 21:09:15.111017  162103 command_runner.go:130] > 24.0.7
	I0108 21:09:15.112214  162103 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0108 21:09:15.135571  162103 command_runner.go:130] > 24.0.7
	I0108 21:09:15.137649  162103 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0108 21:09:15.138950  162103 out.go:177]   - env NO_PROXY=192.168.39.250
	I0108 21:09:15.140307  162103 main.go:141] libmachine: (multinode-472593-m02) Calling .GetIP
	I0108 21:09:15.143233  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:15.143653  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:ba:0a", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:09:02 +0000 UTC Type:0 Mac:52:54:00:92:ba:0a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:multinode-472593-m02 Clientid:01:52:54:00:92:ba:0a}
	I0108 21:09:15.143688  162103 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:09:15.143933  162103 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 21:09:15.147999  162103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 21:09:15.161414  162103 certs.go:56] Setting up /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593 for IP: 192.168.39.225
	I0108 21:09:15.161449  162103 certs.go:190] acquiring lock for shared ca certs: {Name:mkac4a24ed34b812d829a04dcd5630cfa0273c2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 21:09:15.161626  162103 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17866-142784/.minikube/ca.key
	I0108 21:09:15.161685  162103 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17866-142784/.minikube/proxy-client-ca.key
	I0108 21:09:15.161699  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0108 21:09:15.161717  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0108 21:09:15.161730  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0108 21:09:15.161742  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0108 21:09:15.161799  162103 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/home/jenkins/minikube-integration/17866-142784/.minikube/certs/149988.pem (1338 bytes)
	W0108 21:09:15.161827  162103 certs.go:433] ignoring /home/jenkins/minikube-integration/17866-142784/.minikube/certs/home/jenkins/minikube-integration/17866-142784/.minikube/certs/149988_empty.pem, impossibly tiny 0 bytes
	I0108 21:09:15.161837  162103 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca-key.pem (1675 bytes)
	I0108 21:09:15.161858  162103 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/home/jenkins/minikube-integration/17866-142784/.minikube/certs/ca.pem (1078 bytes)
	I0108 21:09:15.161882  162103 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/home/jenkins/minikube-integration/17866-142784/.minikube/certs/cert.pem (1123 bytes)
	I0108 21:09:15.161904  162103 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/home/jenkins/minikube-integration/17866-142784/.minikube/certs/key.pem (1679 bytes)
	I0108 21:09:15.161945  162103 certs.go:437] found cert: /home/jenkins/minikube-integration/17866-142784/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17866-142784/.minikube/files/etc/ssl/certs/1499882.pem (1708 bytes)
	I0108 21:09:15.161972  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:09:15.161985  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/certs/149988.pem -> /usr/share/ca-certificates/149988.pem
	I0108 21:09:15.161999  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/files/etc/ssl/certs/1499882.pem -> /usr/share/ca-certificates/1499882.pem
	I0108 21:09:15.162537  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 21:09:15.187111  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0108 21:09:15.211212  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 21:09:15.234941  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 21:09:15.259036  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 21:09:15.282763  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/certs/149988.pem --> /usr/share/ca-certificates/149988.pem (1338 bytes)
	I0108 21:09:15.306794  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/files/etc/ssl/certs/1499882.pem --> /usr/share/ca-certificates/1499882.pem (1708 bytes)
	I0108 21:09:15.330728  162103 ssh_runner.go:195] Run: openssl version
	I0108 21:09:15.335965  162103 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0108 21:09:15.336285  162103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 21:09:15.346552  162103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:09:15.350921  162103 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan  8 20:51 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:09:15.351230  162103 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 20:51 /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:09:15.351277  162103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 21:09:15.356609  162103 command_runner.go:130] > b5213941
	I0108 21:09:15.356829  162103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 21:09:15.366795  162103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149988.pem && ln -fs /usr/share/ca-certificates/149988.pem /etc/ssl/certs/149988.pem"
	I0108 21:09:15.376367  162103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149988.pem
	I0108 21:09:15.380858  162103 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan  8 20:56 /usr/share/ca-certificates/149988.pem
	I0108 21:09:15.381047  162103 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan  8 20:56 /usr/share/ca-certificates/149988.pem
	I0108 21:09:15.381168  162103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149988.pem
	I0108 21:09:15.386520  162103 command_runner.go:130] > 51391683
	I0108 21:09:15.386820  162103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/149988.pem /etc/ssl/certs/51391683.0"
	I0108 21:09:15.396493  162103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1499882.pem && ln -fs /usr/share/ca-certificates/1499882.pem /etc/ssl/certs/1499882.pem"
	I0108 21:09:15.406641  162103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1499882.pem
	I0108 21:09:15.410973  162103 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan  8 20:56 /usr/share/ca-certificates/1499882.pem
	I0108 21:09:15.411228  162103 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan  8 20:56 /usr/share/ca-certificates/1499882.pem
	I0108 21:09:15.411282  162103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1499882.pem
	I0108 21:09:15.416568  162103 command_runner.go:130] > 3ec20f2e
	I0108 21:09:15.416653  162103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1499882.pem /etc/ssl/certs/3ec20f2e.0"
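The block above implements the standard OpenSSL CA-directory layout: every certificate under /etc/ssl/certs must be reachable through a symlink named after its subject hash, which is what openssl x509 -hash computes. A minimal sketch of the same idiom, using the minikubeCA path from this run:

    # Compute the subject hash OpenSSL uses for CA lookups (b5213941 above).
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # Expose the certificate under <hash>.0 so TLS verification can find it.
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"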
	I0108 21:09:15.425958  162103 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 21:09:15.429955  162103 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 21:09:15.430254  162103 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 21:09:15.430344  162103 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0108 21:09:15.458282  162103 command_runner.go:130] > cgroupfs
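The cgroup-driver probe above matters because kubelet must be configured with the same driver Docker reports; a mismatch keeps pods from starting. A quick manual cross-check on the node (the kubelet config path is the kubeadm default, written later in this run):

    docker info --format '{{.CgroupDriver}}'          # cgroupfs here
    grep cgroupDriver /var/lib/kubelet/config.yaml    # should name the same driver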
	I0108 21:09:15.458967  162103 cni.go:84] Creating CNI manager for ""
	I0108 21:09:15.458982  162103 cni.go:136] 2 nodes found, recommending kindnet
	I0108 21:09:15.458992  162103 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 21:09:15.459010  162103 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.225 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-472593 NodeName:multinode-472593-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 21:09:15.459127  162103 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-472593-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.225
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
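	The generated manifest combines InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in a single file. After the node joins, the cluster-wide half can be compared against what the API server actually stores, via the ConfigMap the preflight output below also points at:

    kubectl -n kube-system get cm kubeadm-config -o yaml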
	
	I0108 21:09:15.459182  162103 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-472593-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-472593 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
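The ExecStart override above is rendered into a systemd drop-in (the 383-byte 10-kubeadm.conf scp'd a few lines below) layered over the base kubelet.service unit. One way to inspect the merged result on the node, assuming systemd's standard tooling:

    systemctl cat kubelet    # prints kubelet.service plus every drop-in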
	I0108 21:09:15.459231  162103 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 21:09:15.467787  162103 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I0108 21:09:15.467831  162103 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0108 21:09:15.467878  162103 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0108 21:09:15.475879  162103 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0108 21:09:15.475905  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0108 21:09:15.475967  162103 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0108 21:09:15.475989  162103 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17866-142784/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I0108 21:09:15.476009  162103 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17866-142784/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I0108 21:09:15.479809  162103 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0108 21:09:15.480137  162103 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0108 21:09:15.480163  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0108 21:09:16.138940  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0108 21:09:16.139023  162103 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0108 21:09:16.143573  162103 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0108 21:09:16.143811  162103 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0108 21:09:16.143851  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0108 21:09:16.622006  162103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:09:16.635417  162103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17866-142784/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0108 21:09:16.635508  162103 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0108 21:09:16.639695  162103 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0108 21:09:16.639946  162103 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0108 21:09:16.639981  162103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17866-142784/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
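Each binary is downloaded with a checksum pinned to the matching .sha256 file on dl.k8s.io. The same verification can be reproduced by hand; a sketch using the kubelet URL from this run:

    curl -LO "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet"
    # dl.k8s.io serves the bare hash, so pair it with the filename for sha256sum.
    echo "$(curl -Ls https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256)  kubelet" | sha256sum --check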
	I0108 21:09:17.118336  162103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0108 21:09:17.127630  162103 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (383 bytes)
	I0108 21:09:17.143941  162103 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 21:09:17.160178  162103 ssh_runner.go:195] Run: grep 192.168.39.250	control-plane.minikube.internal$ /etc/hosts
	I0108 21:09:17.164143  162103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.250	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
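The one-liner above is an idempotent hosts-file update: filter out any stale control-plane.minikube.internal entry, append the current mapping, and copy the temp file back over /etc/hosts. Verifying the result afterwards:

    grep control-plane.minikube.internal /etc/hosts   # expect 192.168.39.250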
	I0108 21:09:17.176800  162103 host.go:66] Checking if "multinode-472593" exists ...
	I0108 21:09:17.177069  162103 config.go:182] Loaded profile config "multinode-472593": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 21:09:17.177266  162103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0108 21:09:17.177315  162103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:09:17.192171  162103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35385
	I0108 21:09:17.192703  162103 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:09:17.193250  162103 main.go:141] libmachine: Using API Version  1
	I0108 21:09:17.193278  162103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:09:17.193630  162103 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:09:17.193854  162103 main.go:141] libmachine: (multinode-472593) Calling .DriverName
	I0108 21:09:17.193995  162103 start.go:304] JoinCluster: &{Name:multinode-472593 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-472593 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true
ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 21:09:17.194096  162103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0108 21:09:17.194117  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHHostname
	I0108 21:09:17.197077  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:09:17.197470  162103 main.go:141] libmachine: (multinode-472593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:79:5e", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:07:46 +0000 UTC Type:0 Mac:52:54:00:18:79:5e Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:multinode-472593 Clientid:01:52:54:00:18:79:5e}
	I0108 21:09:17.197493  162103 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined IP address 192.168.39.250 and MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:09:17.197629  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHPort
	I0108 21:09:17.197851  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHKeyPath
	I0108 21:09:17.197987  162103 main.go:141] libmachine: (multinode-472593) Calling .GetSSHUsername
	I0108 21:09:17.198118  162103 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593/id_rsa Username:docker}
	I0108 21:09:17.370487  162103 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token rci5yi.0au5gnob4mgnokvi --discovery-token-ca-cert-hash sha256:d9519d3845afa8ae3d931945f02b04e4d4298af926dc19c200553582e4bd144f 
	I0108 21:09:17.370552  162103 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.225 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0108 21:09:17.370594  162103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rci5yi.0au5gnob4mgnokvi --discovery-token-ca-cert-hash sha256:d9519d3845afa8ae3d931945f02b04e4d4298af926dc19c200553582e4bd144f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-472593-m02"
	I0108 21:09:17.412943  162103 command_runner.go:130] ! W0108 21:09:17.393968    1159 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0108 21:09:17.571729  162103 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 21:09:20.281449  162103 command_runner.go:130] > [preflight] Running pre-flight checks
	I0108 21:09:20.281481  162103 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0108 21:09:20.281499  162103 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0108 21:09:20.281512  162103 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 21:09:20.281524  162103 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 21:09:20.281532  162103 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0108 21:09:20.281542  162103 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0108 21:09:20.281555  162103 command_runner.go:130] > This node has joined the cluster:
	I0108 21:09:20.281570  162103 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0108 21:09:20.281594  162103 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0108 21:09:20.281610  162103 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0108 21:09:20.281948  162103 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rci5yi.0au5gnob4mgnokvi --discovery-token-ca-cert-hash sha256:d9519d3845afa8ae3d931945f02b04e4d4298af926dc19c200553582e4bd144f --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-472593-m02": (2.911330456s)
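The join ran against a token minted above with --ttl=0, so it does not expire. On a cluster whose token has lapsed, an equivalent command can be regenerated the way minikube does it here; a sketch via minikube ssh, using this run's profile and binary path:

    minikube ssh -p multinode-472593 -- sudo /var/lib/minikube/binaries/v1.28.4/kubeadm token create --print-join-command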
	I0108 21:09:20.281977  162103 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0108 21:09:20.435151  162103 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0108 21:09:20.547060  162103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae minikube.k8s.io/name=multinode-472593 minikube.k8s.io/updated_at=2024_01_08T21_09_20_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 21:09:20.706225  162103 command_runner.go:130] > node/multinode-472593-m02 labeled
	I0108 21:09:20.708073  162103 start.go:306] JoinCluster complete in 3.514072695s
	I0108 21:09:20.708100  162103 cni.go:84] Creating CNI manager for ""
	I0108 21:09:20.708108  162103 cni.go:136] 2 nodes found, recommending kindnet
	I0108 21:09:20.708168  162103 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0108 21:09:20.716632  162103 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0108 21:09:20.716661  162103 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0108 21:09:20.716667  162103 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0108 21:09:20.716673  162103 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0108 21:09:20.716679  162103 command_runner.go:130] > Access: 2024-01-08 21:07:43.663177134 +0000
	I0108 21:09:20.716684  162103 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0108 21:09:20.716688  162103 command_runner.go:130] > Change: 2024-01-08 21:07:41.996177134 +0000
	I0108 21:09:20.716692  162103 command_runner.go:130] >  Birth: -
	I0108 21:09:20.716947  162103 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0108 21:09:20.716971  162103 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0108 21:09:20.752383  162103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0108 21:09:21.117780  162103 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0108 21:09:21.124477  162103 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0108 21:09:21.127805  162103 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0108 21:09:21.156419  162103 command_runner.go:130] > daemonset.apps/kindnet configured
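With two nodes detected, the kindnet CNI manifest is re-applied; the unchanged/configured lines show only the DaemonSet spec needed an update. Its rollout across both nodes can be confirmed with (namespace assumed from minikube's manifest):

    kubectl -n kube-system rollout status daemonset kindnet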
	I0108 21:09:21.159447  162103 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17866-142784/kubeconfig
	I0108 21:09:21.159784  162103 kapi.go:59] client config for multinode-472593: &rest.Config{Host:"https://192.168.39.250:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/client.key", CAFile:"/home/jenkins/minikube-integration/17866-142784/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:09:21.160365  162103 round_trippers.go:463] GET https://192.168.39.250:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0108 21:09:21.160381  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:21.160393  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:21.160403  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:21.167441  162103 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0108 21:09:21.167470  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:21.167480  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:21.167490  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:21.167499  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:21.167525  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:21.167534  162103 round_trippers.go:580]     Content-Length: 291
	I0108 21:09:21.167541  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:21 GMT
	I0108 21:09:21.167550  162103 round_trippers.go:580]     Audit-Id: 10bd58f7-f743-4037-8a6e-62c351a5b423
	I0108 21:09:21.167584  162103 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"bf8d61dc-88f8-4920-b261-602e1fccbaff","resourceVersion":"454","creationTimestamp":"2024-01-08T21:08:19Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0108 21:09:21.167699  162103 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-472593" context rescaled to 1 replicas
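The GET against the scale subresource above confirms CoreDNS is already at the single replica minikube targets for this profile; when it is not, minikube rescales it through the same endpoint. Expressed with kubectl, the equivalent operation is:

    kubectl -n kube-system scale deployment coredns --replicas=1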
	I0108 21:09:21.167732  162103 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.225 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0108 21:09:21.169608  162103 out.go:177] * Verifying Kubernetes components...
	I0108 21:09:21.170856  162103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:09:21.185742  162103 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17866-142784/kubeconfig
	I0108 21:09:21.186157  162103 kapi.go:59] client config for multinode-472593: &rest.Config{Host:"https://192.168.39.250:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/client.crt", KeyFile:"/home/jenkins/minikube-integration/17866-142784/.minikube/profiles/multinode-472593/client.key", CAFile:"/home/jenkins/minikube-integration/17866-142784/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c19800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0108 21:09:21.186577  162103 node_ready.go:35] waiting up to 6m0s for node "multinode-472593-m02" to be "Ready" ...
	I0108 21:09:21.186688  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:21.186702  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:21.186713  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:21.186726  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:21.190241  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:09:21.190264  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:21.190275  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:21.190284  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:21 GMT
	I0108 21:09:21.190292  162103 round_trippers.go:580]     Audit-Id: ad7ec7f6-f542-4c6d-8dc7-ecbedd73fbfa
	I0108 21:09:21.190300  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:21.190310  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:21.190318  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:21.190446  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"512","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3122 chars]
	I0108 21:09:21.687629  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:21.687656  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:21.687667  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:21.687674  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:21.690463  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:09:21.690482  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:21.690494  162103 round_trippers.go:580]     Audit-Id: 4d0369d9-c82b-42b2-a876-3653d1719aef
	I0108 21:09:21.690502  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:21.690510  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:21.690520  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:21.690529  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:21.690538  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:21 GMT
	I0108 21:09:21.691063  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"512","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3122 chars]
	I0108 21:09:22.186998  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:22.187024  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:22.187032  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:22.187038  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:22.191868  162103 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:09:22.191909  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:22.191921  162103 round_trippers.go:580]     Audit-Id: 2bccbac5-cd5f-4389-a064-7a0e84ceff8c
	I0108 21:09:22.191929  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:22.191937  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:22.191944  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:22.191952  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:22.191960  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:22 GMT
	I0108 21:09:22.192561  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"512","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3122 chars]
	I0108 21:09:22.686999  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:22.687037  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:22.687049  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:22.687059  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:22.690997  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:09:22.691029  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:22.691040  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:22.691048  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:22.691056  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:22.691064  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:22.691077  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:22 GMT
	I0108 21:09:22.691093  162103 round_trippers.go:580]     Audit-Id: 3cd64310-84b6-4d08-b075-22bbe318d975
	I0108 21:09:22.691381  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"512","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3122 chars]
	I0108 21:09:23.187123  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:23.187162  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:23.187174  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:23.187186  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:23.189850  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:09:23.189880  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:23.189889  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:23.189897  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:23.189904  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:23 GMT
	I0108 21:09:23.189912  162103 round_trippers.go:580]     Audit-Id: 85f290ec-2f3b-4eb8-b1da-1e7cef726bb3
	I0108 21:09:23.189919  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:23.189926  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:23.190233  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"512","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3122 chars]
	I0108 21:09:23.190636  162103 node_ready.go:58] node "multinode-472593-m02" has status "Ready":"False"
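The loop polls the Node object roughly every 500ms until status.conditions reports Ready=True, bounded by the 6m0s deadline set earlier. Outside this harness the same wait is commonly written as:

    kubectl wait --for=condition=Ready node/multinode-472593-m02 --timeout=6m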
	I0108 21:09:23.686903  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:23.686927  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:23.686934  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:23.686940  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:23.690360  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:09:23.690395  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:23.690406  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:23.690415  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:23.690423  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:23.690432  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:23 GMT
	I0108 21:09:23.690440  162103 round_trippers.go:580]     Audit-Id: 371bc350-3b4a-41bb-b2c9-013d68376c8a
	I0108 21:09:23.690448  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:23.690650  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"512","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3122 chars]
	I0108 21:09:24.186937  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:24.186969  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:24.186992  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:24.187001  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:24.190211  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:09:24.190237  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:24.190248  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:24 GMT
	I0108 21:09:24.190258  162103 round_trippers.go:580]     Audit-Id: d143a795-1f4c-459c-b5c3-eb85d8655189
	I0108 21:09:24.190266  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:24.190275  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:24.190283  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:24.190290  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:24.190489  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"512","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3122 chars]
	I0108 21:09:24.687092  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:24.687129  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:24.687142  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:24.687152  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:24.691013  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:09:24.691049  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:24.691061  162103 round_trippers.go:580]     Audit-Id: 7511f241-0bcf-4b5c-ae7f-552f233d9a14
	I0108 21:09:24.691070  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:24.691078  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:24.691086  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:24.691098  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:24.691104  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:24 GMT
	I0108 21:09:24.691417  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"512","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3122 chars]
	I0108 21:09:25.186908  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:25.186944  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:25.186956  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:25.186965  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:25.190491  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:09:25.190522  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:25.190531  162103 round_trippers.go:580]     Audit-Id: c13e336f-465a-409d-b17d-75bd43e59bac
	I0108 21:09:25.190539  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:25.190547  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:25.190554  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:25.190562  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:25.190577  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:25 GMT
	I0108 21:09:25.190815  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"512","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3122 chars]
	I0108 21:09:25.191176  162103 node_ready.go:58] node "multinode-472593-m02" has status "Ready":"False"
	I0108 21:09:25.687451  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:25.687473  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:25.687481  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:25.687488  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:25.690550  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:09:25.690577  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:25.690587  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:25.690600  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:25.690611  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:25.690618  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:25.690628  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:25 GMT
	I0108 21:09:25.690634  162103 round_trippers.go:580]     Audit-Id: b89a05c2-8ac9-4bb9-b80b-ef32e13b2b56
	I0108 21:09:25.690803  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"512","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3122 chars]
	I0108 21:09:26.187696  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:26.187729  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:26.187743  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:26.187752  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:26.190403  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:09:26.190421  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:26.190427  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:26.190435  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:26 GMT
	I0108 21:09:26.190443  162103 round_trippers.go:580]     Audit-Id: a8e9a31f-7bb5-478a-a468-436b568fb0fc
	I0108 21:09:26.190451  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:26.190458  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:26.190466  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:26.190560  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"512","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3122 chars]
	I0108 21:09:26.687828  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:26.687852  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:26.687860  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:26.687867  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:26.690685  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:09:26.690712  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:26.690723  162103 round_trippers.go:580]     Audit-Id: 45df1d46-8816-4667-aa4b-ac858adedb0f
	I0108 21:09:26.690732  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:26.690741  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:26.690750  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:26.690759  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:26.690766  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:26 GMT
	I0108 21:09:26.690843  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"512","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3122 chars]
	I0108 21:09:27.187710  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:27.187739  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:27.187756  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:27.187762  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:27.190667  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:09:27.190697  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:27.190708  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:27.190718  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:27 GMT
	I0108 21:09:27.190727  162103 round_trippers.go:580]     Audit-Id: 05e0c6fb-6382-46e1-af69-d7be9edc04e6
	I0108 21:09:27.190737  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:27.190756  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:27.190762  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:27.191133  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"512","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3122 chars]
	I0108 21:09:27.191622  162103 node_ready.go:58] node "multinode-472593-m02" has status "Ready":"False"
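
	[editor note] The repeated GETs above re-fetch the Node object on a roughly 500ms cadence until its Ready condition turns True. Below is a minimal client-go sketch of that poll, assuming a standard kubeconfig and the node name from this log; the function name pollNodeReady is illustrative, not minikube's actual helper.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// pollNodeReady re-fetches the Node until its Ready condition is True,
	// mirroring the ~500ms GET loop visible in the log above.
	func pollNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("node %q did not become Ready within %s", name, timeout)
	}

	func main() {
		// Assumes a reachable kubeconfig; RecommendedHomeFile is ~/.kube/config.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := pollNodeReady(cs, "multinode-472593-m02", 6*time.Minute); err != nil {
			panic(err)
		}
	}
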
	I0108 21:09:27.687543  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:27.687622  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:27.687633  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:27.687642  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:27.691412  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:09:27.691441  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:27.691451  162103 round_trippers.go:580]     Audit-Id: 0410fe3a-034c-48b3-b32a-7a611b71e879
	I0108 21:09:27.691460  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:27.691469  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:27.691477  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:27.691484  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:27.691493  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:27 GMT
	I0108 21:09:27.692172  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"512","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3122 chars]
	I0108 21:09:28.187549  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:28.187585  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:28.187598  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:28.187607  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:28.191283  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:09:28.191306  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:28.191313  162103 round_trippers.go:580]     Audit-Id: 39428590-6fe9-45fd-a490-38b4c87e6889
	I0108 21:09:28.191318  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:28.191323  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:28.191329  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:28.191336  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:28.191344  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:28 GMT
	I0108 21:09:28.191573  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"512","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3122 chars]
	I0108 21:09:28.687166  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:28.687191  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:28.687199  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:28.687205  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:28.690840  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:09:28.690869  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:28.690881  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:28.690893  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:28.690902  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:28.690911  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:28 GMT
	I0108 21:09:28.690924  162103 round_trippers.go:580]     Audit-Id: 45421ccc-8e1f-42ca-9805-5f90f9270ff7
	I0108 21:09:28.690933  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:28.691292  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"512","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3122 chars]
	I0108 21:09:29.187082  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:29.187112  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:29.187120  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:29.187126  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:29.190385  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:09:29.190410  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:29.190422  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:29.190430  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:29.190437  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:29.190445  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:29.190459  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:29 GMT
	I0108 21:09:29.190467  162103 round_trippers.go:580]     Audit-Id: 8780a5af-5c4b-4e86-9d62-518a7c37df2e
	I0108 21:09:29.190613  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"512","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3122 chars]
	I0108 21:09:29.687359  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:29.687388  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:29.687398  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:29.687406  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:29.690068  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:09:29.690098  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:29.690109  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:29.690117  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:29.690124  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:29.690130  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:29 GMT
	I0108 21:09:29.690137  162103 round_trippers.go:580]     Audit-Id: 0a95fb80-dfd6-49b8-8ef5-93500e006ef1
	I0108 21:09:29.690146  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:29.690335  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"512","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3122 chars]
	I0108 21:09:29.690644  162103 node_ready.go:58] node "multinode-472593-m02" has status "Ready":"False"
	I0108 21:09:30.186856  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:30.186881  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:30.186889  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:30.186895  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:30.190244  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:09:30.190272  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:30.190283  162103 round_trippers.go:580]     Audit-Id: dc3e71bc-8785-4fc3-9ba4-b59b442b7e53
	I0108 21:09:30.190292  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:30.190300  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:30.190309  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:30.190320  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:30.190327  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:30 GMT
	I0108 21:09:30.190445  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"532","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3391 chars]
	I0108 21:09:30.687059  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:30.687091  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:30.687103  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:30.687113  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:30.689965  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:09:30.689990  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:30.689999  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:30.690006  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:30.690013  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:30.690021  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:30 GMT
	I0108 21:09:30.690028  162103 round_trippers.go:580]     Audit-Id: c8435ad6-5ffb-44a5-91fa-1386937e3219
	I0108 21:09:30.690036  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:30.690211  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"532","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3391 chars]
	I0108 21:09:31.187239  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:31.187267  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:31.187278  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:31.187287  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:31.190087  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:09:31.190116  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:31.190126  162103 round_trippers.go:580]     Audit-Id: ece55bc1-711e-4ef9-9b4e-06cb304af645
	I0108 21:09:31.190134  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:31.190146  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:31.190154  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:31.190163  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:31.190173  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:31 GMT
	I0108 21:09:31.190368  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"532","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3391 chars]
	I0108 21:09:31.687650  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:31.687681  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:31.687690  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:31.687696  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:31.690263  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:09:31.690287  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:31.690296  162103 round_trippers.go:580]     Audit-Id: 89d50739-89d8-4752-89b5-dc65b314a116
	I0108 21:09:31.690302  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:31.690310  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:31.690319  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:31.690328  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:31.690338  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:31 GMT
	I0108 21:09:31.690493  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"532","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3391 chars]
	I0108 21:09:31.690754  162103 node_ready.go:58] node "multinode-472593-m02" has status "Ready":"False"
	I0108 21:09:32.187708  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:32.187742  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:32.187753  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:32.187762  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:32.190905  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:09:32.190927  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:32.190934  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:32 GMT
	I0108 21:09:32.190940  162103 round_trippers.go:580]     Audit-Id: b27ce7ef-47d9-4f18-af57-5d0b63dade8f
	I0108 21:09:32.190945  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:32.190950  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:32.190957  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:32.190964  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:32.191274  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"532","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3391 chars]
	I0108 21:09:32.687688  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:32.687714  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:32.687722  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:32.687728  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:32.690576  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:09:32.690612  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:32.690619  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:32.690625  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:32.690631  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:32.690636  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:32 GMT
	I0108 21:09:32.690643  162103 round_trippers.go:580]     Audit-Id: 0a05e258-bdfc-4a5a-8ca2-b5322cbd6e64
	I0108 21:09:32.690648  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:32.690783  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"532","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3391 chars]
	I0108 21:09:33.187510  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:33.187543  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:33.187551  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:33.187569  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:33.190326  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:09:33.190355  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:33.190365  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:33 GMT
	I0108 21:09:33.190392  162103 round_trippers.go:580]     Audit-Id: 81970d7c-761f-422a-909c-e775eaba4253
	I0108 21:09:33.190407  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:33.190414  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:33.190428  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:33.190436  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:33.190667  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"532","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3391 chars]
	I0108 21:09:33.687368  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:33.687394  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:33.687402  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:33.687408  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:33.690169  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:09:33.690190  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:33.690197  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:33.690211  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:33.690217  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:33 GMT
	I0108 21:09:33.690222  162103 round_trippers.go:580]     Audit-Id: b5495d77-597e-432a-9b5f-1fc8b2e29769
	I0108 21:09:33.690227  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:33.690233  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:33.690417  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"532","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3391 chars]
	I0108 21:09:34.186924  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:34.186952  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:34.186961  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:34.186967  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:34.190069  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:09:34.190103  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:34.190114  162103 round_trippers.go:580]     Audit-Id: ccf226ee-4f91-4c0d-bc46-ae18553b2c0c
	I0108 21:09:34.190120  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:34.190128  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:34.190133  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:34.190138  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:34.190143  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:34 GMT
	I0108 21:09:34.190222  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"532","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3391 chars]
	I0108 21:09:34.190465  162103 node_ready.go:58] node "multinode-472593-m02" has status "Ready":"False"
	I0108 21:09:34.686811  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:34.686842  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:34.686855  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:34.686865  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:34.689726  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:09:34.689754  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:34.689764  162103 round_trippers.go:580]     Audit-Id: 0c85220b-e75c-4911-9e62-0e912d598890
	I0108 21:09:34.689773  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:34.689781  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:34.689789  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:34.689798  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:34.689806  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:34 GMT
	I0108 21:09:34.689924  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"532","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3391 chars]
	I0108 21:09:35.187217  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:35.187242  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:35.187250  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:35.187256  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:35.189922  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:09:35.189946  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:35.189952  162103 round_trippers.go:580]     Audit-Id: 8c14b365-caf0-493b-be63-dff327965de5
	I0108 21:09:35.189958  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:35.189963  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:35.189968  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:35.189973  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:35.189979  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:35 GMT
	I0108 21:09:35.190130  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"540","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3257 chars]
	I0108 21:09:35.190375  162103 node_ready.go:49] node "multinode-472593-m02" has status "Ready":"True"
	I0108 21:09:35.190390  162103 node_ready.go:38] duration metric: took 14.003788963s waiting for node "multinode-472593-m02" to be "Ready" ...
	I0108 21:09:35.190398  162103 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
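
	[editor note] Once the node reports Ready (14.0s here), the log shows an extra wait of up to 6m0s for system-critical pods carrying the listed component/k8s-app labels. A hedged sketch of that second check follows, reusing the imports and clientset from the sketch above; allSystemPodsReady is an illustrative name, and as a simplification it inspects every kube-system pod rather than filtering on the exact label set minikube uses.

	// allSystemPodsReady lists kube-system pods and reports whether every one
	// carries a PodReady condition with status True, as the wait above requires.
	func allSystemPodsReady(cs kubernetes.Interface) (bool, error) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
					break
				}
			}
			if !ready {
				return false, nil
			}
		}
		return true, nil
	}
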
	I0108 21:09:35.190449  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods
	I0108 21:09:35.190459  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:35.190465  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:35.190474  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:35.194270  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:09:35.194294  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:35.194304  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:35.194312  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:35.194321  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:35.194330  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:35.194338  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:35 GMT
	I0108 21:09:35.194347  162103 round_trippers.go:580]     Audit-Id: b59af8b6-36e6-42b9-8bf9-9eb045714d95
	I0108 21:09:35.194976  162103 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"541"},"items":[{"metadata":{"name":"coredns-5dd5756b68-wpmbp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3dfbd2f3-95c8-4c55-9312-e79187f61d66","resourceVersion":"450","creationTimestamp":"2024-01-08T21:08:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"81566ea4-ce50-4bf7-a009-e76513cef471","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81566ea4-ce50-4bf7-a009-e76513cef471\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67522 chars]
	I0108 21:09:35.196974  162103 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-wpmbp" in "kube-system" namespace to be "Ready" ...
	I0108 21:09:35.197045  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wpmbp
	I0108 21:09:35.197053  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:35.197061  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:35.197069  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:35.201461  162103 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0108 21:09:35.201479  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:35.201485  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:35.201494  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:35 GMT
	I0108 21:09:35.201503  162103 round_trippers.go:580]     Audit-Id: 054edf73-a35c-4834-952d-070b6ea8eea4
	I0108 21:09:35.201512  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:35.201521  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:35.201529  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:35.201965  162103 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wpmbp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"3dfbd2f3-95c8-4c55-9312-e79187f61d66","resourceVersion":"450","creationTimestamp":"2024-01-08T21:08:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"81566ea4-ce50-4bf7-a009-e76513cef471","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81566ea4-ce50-4bf7-a009-e76513cef471\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6284 chars]
	I0108 21:09:35.202406  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:09:35.202422  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:35.202431  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:35.202438  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:35.205494  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:09:35.205514  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:35.205523  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:35.205531  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:35.205543  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:35.205550  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:35 GMT
	I0108 21:09:35.205566  162103 round_trippers.go:580]     Audit-Id: c5833592-7c41-443f-884a-4476a3d6a3ee
	I0108 21:09:35.205576  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:35.205784  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"459","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0108 21:09:35.206151  162103 pod_ready.go:92] pod "coredns-5dd5756b68-wpmbp" in "kube-system" namespace has status "Ready":"True"
	I0108 21:09:35.206170  162103 pod_ready.go:81] duration metric: took 9.175081ms waiting for pod "coredns-5dd5756b68-wpmbp" in "kube-system" namespace to be "Ready" ...
	I0108 21:09:35.206185  162103 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-472593" in "kube-system" namespace to be "Ready" ...
	I0108 21:09:35.206274  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-472593
	I0108 21:09:35.206287  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:35.206298  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:35.206308  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:35.208824  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:09:35.208847  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:35.208857  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:35 GMT
	I0108 21:09:35.208864  162103 round_trippers.go:580]     Audit-Id: 34175f52-c8db-47bf-b754-1af2df85d0a7
	I0108 21:09:35.208873  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:35.208880  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:35.208888  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:35.208899  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:35.209065  162103 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-472593","namespace":"kube-system","uid":"48fa98ec-2db1-4f47-9f6b-0a4e7ff632c8","resourceVersion":"377","creationTimestamp":"2024-01-08T21:08:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.250:2379","kubernetes.io/config.hash":"bbfec1d04b85f774100656f1f492ef89","kubernetes.io/config.mirror":"bbfec1d04b85f774100656f1f492ef89","kubernetes.io/config.seen":"2024-01-08T21:08:19.534831121Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5872 chars]
	I0108 21:09:35.209461  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:09:35.209476  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:35.209483  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:35.209492  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:35.211583  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:09:35.211601  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:35.211610  162103 round_trippers.go:580]     Audit-Id: e6e10178-0d6c-4146-a335-baf579d8ee08
	I0108 21:09:35.211618  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:35.211625  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:35.211635  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:35.211649  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:35.211657  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:35 GMT
	I0108 21:09:35.211800  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"459","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0108 21:09:35.212142  162103 pod_ready.go:92] pod "etcd-multinode-472593" in "kube-system" namespace has status "Ready":"True"
	I0108 21:09:35.212161  162103 pod_ready.go:81] duration metric: took 5.964323ms waiting for pod "etcd-multinode-472593" in "kube-system" namespace to be "Ready" ...
	I0108 21:09:35.212180  162103 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-472593" in "kube-system" namespace to be "Ready" ...
	I0108 21:09:35.212265  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-472593
	I0108 21:09:35.212274  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:35.212284  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:35.212297  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:35.214222  162103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:09:35.214239  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:35.214248  162103 round_trippers.go:580]     Audit-Id: 3994ef8e-5792-48f2-8764-b842918e1769
	I0108 21:09:35.214255  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:35.214263  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:35.214272  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:35.214282  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:35.214292  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:35 GMT
	I0108 21:09:35.214443  162103 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-472593","namespace":"kube-system","uid":"fec467b8-a037-4806-8c81-3d53bf2c4bf2","resourceVersion":"426","creationTimestamp":"2024-01-08T21:08:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.250:8443","kubernetes.io/config.hash":"b179c45695f1bdcc29858d4d51fc6758","kubernetes.io/config.mirror":"b179c45695f1bdcc29858d4d51fc6758","kubernetes.io/config.seen":"2024-01-08T21:08:19.534832719Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7408 chars]
	I0108 21:09:35.214790  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:09:35.214801  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:35.214807  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:35.214814  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:35.216877  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:09:35.216897  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:35.216905  162103 round_trippers.go:580]     Audit-Id: d041b8a2-fd7e-48a0-aa4e-584725864373
	I0108 21:09:35.216910  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:35.216916  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:35.216921  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:35.216927  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:35.216934  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:35 GMT
	I0108 21:09:35.217206  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"459","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0108 21:09:35.217485  162103 pod_ready.go:92] pod "kube-apiserver-multinode-472593" in "kube-system" namespace has status "Ready":"True"
	I0108 21:09:35.217499  162103 pod_ready.go:81] duration metric: took 5.308978ms waiting for pod "kube-apiserver-multinode-472593" in "kube-system" namespace to be "Ready" ...
	I0108 21:09:35.217508  162103 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-472593" in "kube-system" namespace to be "Ready" ...
	I0108 21:09:35.217552  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-472593
	I0108 21:09:35.217559  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:35.217566  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:35.217572  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:35.219310  162103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:09:35.219328  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:35.219336  162103 round_trippers.go:580]     Audit-Id: 3fe82bfa-c3a7-4080-bf58-93115dec535c
	I0108 21:09:35.219343  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:35.219355  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:35.219366  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:35.219374  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:35.219387  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:35 GMT
	I0108 21:09:35.219552  162103 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-472593","namespace":"kube-system","uid":"a73873ed-7df0-44f1-82ea-6653d7514a7a","resourceVersion":"424","creationTimestamp":"2024-01-08T21:08:19Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6cd028364979f2013eabd2e9e20d2c13","kubernetes.io/config.mirror":"6cd028364979f2013eabd2e9e20d2c13","kubernetes.io/config.seen":"2024-01-08T21:08:19.534833628Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6973 chars]
	I0108 21:09:35.219934  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:09:35.219948  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:35.219955  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:35.219962  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:35.221797  162103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0108 21:09:35.221810  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:35.221815  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:35.221821  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:35 GMT
	I0108 21:09:35.221829  162103 round_trippers.go:580]     Audit-Id: 83dd3f37-6f48-46c7-9b84-f3d8918a109a
	I0108 21:09:35.221842  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:35.221851  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:35.221862  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:35.222130  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"459","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0108 21:09:35.222377  162103 pod_ready.go:92] pod "kube-controller-manager-multinode-472593" in "kube-system" namespace has status "Ready":"True"
	I0108 21:09:35.222390  162103 pod_ready.go:81] duration metric: took 4.877016ms waiting for pod "kube-controller-manager-multinode-472593" in "kube-system" namespace to be "Ready" ...
	I0108 21:09:35.222401  162103 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cxgc4" in "kube-system" namespace to be "Ready" ...
	I0108 21:09:35.387723  162103 request.go:629] Waited for 165.231796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cxgc4
	I0108 21:09:35.387799  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cxgc4
	I0108 21:09:35.387810  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:35.387822  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:35.387834  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:35.390282  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:09:35.390302  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:35.390309  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:35 GMT
	I0108 21:09:35.390315  162103 round_trippers.go:580]     Audit-Id: 55a4c8ec-f54c-4ad6-8295-2a6c1c61ea5e
	I0108 21:09:35.390320  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:35.390325  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:35.390330  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:35.390337  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:35.390480  162103 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cxgc4","generateName":"kube-proxy-","namespace":"kube-system","uid":"f3a1181a-74d7-4f25-ae79-a3e3aa07fc4a","resourceVersion":"522","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fbfa7824-80a4-44c3-9492-5116ffb6419b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbfa7824-80a4-44c3-9492-5116ffb6419b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
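
	(Editor's note: the "Waited for ... due to client-side throttling, not priority and fairness" lines in this stretch come from client-go's default client-side token-bucket rate limiter, not from API Priority and Fairness on the server. A minimal sketch of where those limits live, assuming a standard `rest.Config`; this is illustrative, not minikube's code:)

```go
// Hedged sketch: client-go delays requests through rest.Config's rate
// limiter. Defaults are QPS=5 and Burst=10; raising them removes the
// "Waited for ... due to client-side throttling" delays seen above.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // client-go default is 5 requests/sec
	cfg.Burst = 100 // client-go default is 10
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", cs)
}
```
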
	I0108 21:09:35.587304  162103 request.go:629] Waited for 196.333133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:35.587370  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593-m02
	I0108 21:09:35.587377  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:35.587387  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:35.587399  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:35.589965  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:09:35.589991  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:35.590001  162103 round_trippers.go:580]     Audit-Id: 1de50887-90cd-4102-b0ce-0fcf566e0605
	I0108 21:09:35.590009  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:35.590024  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:35.590032  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:35.590041  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:35.590049  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:35 GMT
	I0108 21:09:35.590493  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593-m02","uid":"c516cef9-507b-4e62-8f82-db33f478bc45","resourceVersion":"540","creationTimestamp":"2024-01-08T21:09:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_08T21_09_20_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:09:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3257 chars]
	I0108 21:09:35.590790  162103 pod_ready.go:92] pod "kube-proxy-cxgc4" in "kube-system" namespace has status "Ready":"True"
	I0108 21:09:35.590808  162103 pod_ready.go:81] duration metric: took 368.395758ms waiting for pod "kube-proxy-cxgc4" in "kube-system" namespace to be "Ready" ...
	I0108 21:09:35.590819  162103 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m4w4g" in "kube-system" namespace to be "Ready" ...
	I0108 21:09:35.788053  162103 request.go:629] Waited for 197.156365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m4w4g
	I0108 21:09:35.788113  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m4w4g
	I0108 21:09:35.788118  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:35.788125  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:35.788131  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:35.790803  162103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0108 21:09:35.790830  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:35.790842  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:35.790851  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:35.790860  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:35.790866  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:35.790872  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:35 GMT
	I0108 21:09:35.790880  162103 round_trippers.go:580]     Audit-Id: 9959d92b-6f30-4b3f-b270-8f557d670b65
	I0108 21:09:35.791014  162103 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-m4w4g","generateName":"kube-proxy-","namespace":"kube-system","uid":"1394b324-16bf-4300-ab4d-443652d36475","resourceVersion":"415","creationTimestamp":"2024-01-08T21:08:31Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fbfa7824-80a4-44c3-9492-5116ffb6419b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fbfa7824-80a4-44c3-9492-5116ffb6419b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0108 21:09:35.987802  162103 request.go:629] Waited for 196.342496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:09:35.987860  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:09:35.987865  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:35.987872  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:35.987881  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:35.991488  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:09:35.991507  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:35.991514  162103 round_trippers.go:580]     Audit-Id: 46c26d8b-1772-4b17-8f56-509cc8105263
	I0108 21:09:35.991520  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:35.991525  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:35.991530  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:35.991535  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:35.991541  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:35 GMT
	I0108 21:09:35.991634  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"459","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0108 21:09:35.991929  162103 pod_ready.go:92] pod "kube-proxy-m4w4g" in "kube-system" namespace has status "Ready":"True"
	I0108 21:09:35.991945  162103 pod_ready.go:81] duration metric: took 401.113896ms waiting for pod "kube-proxy-m4w4g" in "kube-system" namespace to be "Ready" ...
	I0108 21:09:35.991955  162103 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-472593" in "kube-system" namespace to be "Ready" ...
	I0108 21:09:36.188316  162103 request.go:629] Waited for 196.287515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-472593
	I0108 21:09:36.188399  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-472593
	I0108 21:09:36.188406  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:36.188416  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:36.188424  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:36.192007  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:09:36.192031  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:36.192042  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:36.192052  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:36.192058  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:36 GMT
	I0108 21:09:36.192064  162103 round_trippers.go:580]     Audit-Id: 3b439a5e-281f-487d-9db0-48785005632d
	I0108 21:09:36.192074  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:36.192079  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:36.192500  162103 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-472593","namespace":"kube-system","uid":"2e871a08-9d08-4085-a056-3a2daa441ea9","resourceVersion":"425","creationTimestamp":"2024-01-08T21:08:19Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cc2702e0e8122c22aff19fbe1088d968","kubernetes.io/config.mirror":"cc2702e0e8122c22aff19fbe1088d968","kubernetes.io/config.seen":"2024-01-08T21:08:19.534826671Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-08T21:08:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4703 chars]
	I0108 21:09:36.388312  162103 request.go:629] Waited for 195.364376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:09:36.388384  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes/multinode-472593
	I0108 21:09:36.388391  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:36.388401  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:36.388410  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:36.391716  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:09:36.391739  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:36.391746  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:36 GMT
	I0108 21:09:36.391753  162103 round_trippers.go:580]     Audit-Id: cc9c49b6-d189-44c8-8853-608c47281535
	I0108 21:09:36.391762  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:36.391770  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:36.391778  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:36.391786  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:36.391922  162103 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"459","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-08T21:08:16Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0108 21:09:36.392318  162103 pod_ready.go:92] pod "kube-scheduler-multinode-472593" in "kube-system" namespace has status "Ready":"True"
	I0108 21:09:36.392333  162103 pod_ready.go:81] duration metric: took 400.371414ms waiting for pod "kube-scheduler-multinode-472593" in "kube-system" namespace to be "Ready" ...
	I0108 21:09:36.392346  162103 pod_ready.go:38] duration metric: took 1.201938205s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
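
	(Editor's note: each pod_ready check above is a plain GET on the pod followed by a test of its Ready condition. A minimal sketch of that check with client-go, assuming a working kubeconfig; pod and namespace names are taken from the log, the code itself is illustrative, not minikube's:)

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True, which is
// what the pod_ready.go loop above waits for on each system pod.
func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ok, err := podReady(context.Background(), cs, "kube-system", "kube-scheduler-multinode-472593")
	fmt.Println(ok, err)
}
```
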
	I0108 21:09:36.392369  162103 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 21:09:36.392421  162103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:09:36.405929  162103 system_svc.go:56] duration metric: took 13.55285ms WaitForService to wait for kubelet.
	I0108 21:09:36.405959  162103 kubeadm.go:581] duration metric: took 15.238195502s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 21:09:36.405980  162103 node_conditions.go:102] verifying NodePressure condition ...
	I0108 21:09:36.587412  162103 request.go:629] Waited for 181.343591ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.250:8443/api/v1/nodes
	I0108 21:09:36.587482  162103 round_trippers.go:463] GET https://192.168.39.250:8443/api/v1/nodes
	I0108 21:09:36.587487  162103 round_trippers.go:469] Request Headers:
	I0108 21:09:36.587495  162103 round_trippers.go:473]     Accept: application/json, */*
	I0108 21:09:36.587504  162103 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0108 21:09:36.590533  162103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0108 21:09:36.590560  162103 round_trippers.go:577] Response Headers:
	I0108 21:09:36.590568  162103 round_trippers.go:580]     Audit-Id: 60c3132e-f22f-4350-94bd-9e39ddd151dc
	I0108 21:09:36.590574  162103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0108 21:09:36.590579  162103 round_trippers.go:580]     Content-Type: application/json
	I0108 21:09:36.590584  162103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 34739c57-0bee-4aa4-b4ea-ef42a6b7b910
	I0108 21:09:36.590589  162103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: c7735b76-1e56-4e9b-8a68-7cd6ec32ca56
	I0108 21:09:36.590594  162103 round_trippers.go:580]     Date: Mon, 08 Jan 2024 21:09:36 GMT
	I0108 21:09:36.590853  162103 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"542"},"items":[{"metadata":{"name":"multinode-472593","uid":"5311f085-6636-4fab-b7a4-c8a73588ac4c","resourceVersion":"459","creationTimestamp":"2024-01-08T21:08:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-472593","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae","minikube.k8s.io/name":"multinode-472593","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_08T21_08_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9141 chars]
	I0108 21:09:36.591287  162103 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:09:36.591306  162103 node_conditions.go:123] node cpu capacity is 2
	I0108 21:09:36.591322  162103 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 21:09:36.591329  162103 node_conditions.go:123] node cpu capacity is 2
	I0108 21:09:36.591335  162103 node_conditions.go:105] duration metric: took 185.34958ms to run NodePressure ...
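
	(Editor's note: the NodePressure step is a single List of all nodes followed by a read of the capacity fields echoed above — ephemeral-storage 17784752Ki and cpu 2 for each node. A hedged sketch of the same read; illustrative only:)

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// One List call, then print the two capacity values the
	// node_conditions lines above report per node.
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
	}
}
```
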
	I0108 21:09:36.591350  162103 start.go:228] waiting for startup goroutines ...
	I0108 21:09:36.591389  162103 start.go:242] writing updated cluster config ...
	I0108 21:09:36.591675  162103 ssh_runner.go:195] Run: rm -f paused
	I0108 21:09:36.639647  162103 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 21:09:36.642842  162103 out.go:177] * Done! kubectl is now configured to use "multinode-472593" cluster and "default" namespace by default
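
	(Editor's note: the final line compares the local kubectl version, 1.29.0, against the cluster's API server version, 1.28.4; a minor skew of 1 is within kubectl's supported range, so this is informational. The server half of that check can be reproduced with the discovery client; a sketch, not minikube's code:)

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Server side of the "kubectl: 1.29.0, cluster: 1.28.4" skew check above.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println(v.GitVersion) // e.g. v1.28.4
}
```
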
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-01-08 21:07:42 UTC, ends at Mon 2024-01-08 21:10:59 UTC. --
	Jan 08 21:08:43 multinode-472593 dockerd[1125]: time="2024-01-08T21:08:43.826411131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:08:43 multinode-472593 dockerd[1125]: time="2024-01-08T21:08:43.845048952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:08:43 multinode-472593 dockerd[1125]: time="2024-01-08T21:08:43.859578635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:08:43 multinode-472593 dockerd[1125]: time="2024-01-08T21:08:43.859616838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:08:43 multinode-472593 dockerd[1125]: time="2024-01-08T21:08:43.861713686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:08:44 multinode-472593 cri-dockerd[1010]: time="2024-01-08T21:08:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a93e4c901a018d197bb6390512572bc131c5490aa89b8a5430b295463083cc18/resolv.conf as [nameserver 192.168.122.1]"
	Jan 08 21:08:44 multinode-472593 cri-dockerd[1010]: time="2024-01-08T21:08:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ce0ad4e0b3d30d0393f19f293ac2e4e5f26fb33e1c63a6f80531f235af4a9479/resolv.conf as [nameserver 192.168.122.1]"
	Jan 08 21:08:44 multinode-472593 dockerd[1125]: time="2024-01-08T21:08:44.445718710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:08:44 multinode-472593 dockerd[1125]: time="2024-01-08T21:08:44.445827479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:08:44 multinode-472593 dockerd[1125]: time="2024-01-08T21:08:44.445844559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:08:44 multinode-472593 dockerd[1125]: time="2024-01-08T21:08:44.445853690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:08:44 multinode-472593 dockerd[1125]: time="2024-01-08T21:08:44.566474742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:08:44 multinode-472593 dockerd[1125]: time="2024-01-08T21:08:44.566592807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:08:44 multinode-472593 dockerd[1125]: time="2024-01-08T21:08:44.566682284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:08:44 multinode-472593 dockerd[1125]: time="2024-01-08T21:08:44.567433236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:09:37 multinode-472593 dockerd[1125]: time="2024-01-08T21:09:37.845099931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:09:37 multinode-472593 dockerd[1125]: time="2024-01-08T21:09:37.845354123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:09:37 multinode-472593 dockerd[1125]: time="2024-01-08T21:09:37.845399456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:09:37 multinode-472593 dockerd[1125]: time="2024-01-08T21:09:37.845426407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:09:38 multinode-472593 cri-dockerd[1010]: time="2024-01-08T21:09:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/59befc33b9b8feb29e6abe1a22313759fd9913bfe5115b30c223c847b46b5a80/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jan 08 21:09:39 multinode-472593 cri-dockerd[1010]: time="2024-01-08T21:09:39Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jan 08 21:09:39 multinode-472593 dockerd[1125]: time="2024-01-08T21:09:39.508004714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 08 21:09:39 multinode-472593 dockerd[1125]: time="2024-01-08T21:09:39.508067849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 21:09:39 multinode-472593 dockerd[1125]: time="2024-01-08T21:09:39.508088623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 21:09:39 multinode-472593 dockerd[1125]: time="2024-01-08T21:09:39.508100355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	cdb2078211e30       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   59befc33b9b8f       busybox-5bc68d56bd-gp7d2
	9c76d8f5d74f7       6e38f40d628db                                                                                         2 minutes ago        Running             storage-provisioner       0                   ce0ad4e0b3d30       storage-provisioner
	46e275bf76154       ead0a4a53df89                                                                                         2 minutes ago        Running             coredns                   0                   a93e4c901a018       coredns-5dd5756b68-wpmbp
	90955ae5f2f45       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              2 minutes ago        Running             kindnet-cni               0                   a722ce28407e5       kindnet-zhh5c
	e4312a76bdf5b       83f6cc407eed8                                                                                         2 minutes ago        Running             kube-proxy                0                   8afea96d34709       kube-proxy-m4w4g
	fec7845e9e4a6       e3db313c6dbc0                                                                                         2 minutes ago        Running             kube-scheduler            0                   22afc775ff4a1       kube-scheduler-multinode-472593
	5917a713cdfad       73deb9a3f7025                                                                                         2 minutes ago        Running             etcd                      0                   a937e6b3462ca       etcd-multinode-472593
	970aa552c28c0       d058aa5ab969c                                                                                         2 minutes ago        Running             kube-controller-manager   0                   b06f199b03c23       kube-controller-manager-multinode-472593
	05835bf9e682c       7fe0e6f37db33                                                                                         2 minutes ago        Running             kube-apiserver            0                   25453dfefa6ff       kube-apiserver-multinode-472593
	
	
	==> coredns [46e275bf7615] <==
	[INFO] 10.244.0.3:34089 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103922s
	[INFO] 10.244.1.2:59872 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130471s
	[INFO] 10.244.1.2:53241 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001744123s
	[INFO] 10.244.1.2:42724 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108364s
	[INFO] 10.244.1.2:47796 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007312s
	[INFO] 10.244.1.2:38872 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001236799s
	[INFO] 10.244.1.2:35296 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000069844s
	[INFO] 10.244.1.2:54071 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091527s
	[INFO] 10.244.1.2:53517 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070185s
	[INFO] 10.244.0.3:35218 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101928s
	[INFO] 10.244.0.3:45824 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068646s
	[INFO] 10.244.0.3:42486 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000493s
	[INFO] 10.244.0.3:36227 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096339s
	[INFO] 10.244.1.2:57447 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128127s
	[INFO] 10.244.1.2:53808 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012419s
	[INFO] 10.244.1.2:50905 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000147772s
	[INFO] 10.244.1.2:42650 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106991s
	[INFO] 10.244.0.3:60392 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114941s
	[INFO] 10.244.0.3:46298 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149853s
	[INFO] 10.244.0.3:44699 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000147881s
	[INFO] 10.244.0.3:35674 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000074285s
	[INFO] 10.244.1.2:40886 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132681s
	[INFO] 10.244.1.2:50273 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000130272s
	[INFO] 10.244.1.2:60839 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000104277s
	[INFO] 10.244.1.2:43295 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000088521s
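
	(Editor's note: the coredns entries above are query logs from the test's busybox DNS checks; each records the query type and name, the NOERROR/NXDOMAIN outcome, and the latency. One such lookup can be reproduced from inside the cluster with Go's resolver; this only resolves in a pod whose resolv.conf points at the cluster DNS, 10.96.0.10:)

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Mirrors the "A IN kubernetes.default.svc.cluster.local." queries
	// logged above; run outside the cluster this will fail to resolve.
	addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
	fmt.Println(addrs, err)
}
```
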
	
	
	==> describe nodes <==
	Name:               multinode-472593
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-472593
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae
	                    minikube.k8s.io/name=multinode-472593
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T21_08_20_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 21:08:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-472593
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 21:10:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 21:09:51 +0000   Mon, 08 Jan 2024 21:08:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 21:09:51 +0000   Mon, 08 Jan 2024 21:08:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 21:09:51 +0000   Mon, 08 Jan 2024 21:08:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 21:09:51 +0000   Mon, 08 Jan 2024 21:08:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.250
	  Hostname:    multinode-472593
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 3c54bfeb40d744caace0c70ea7c9cbf9
	  System UUID:                3c54bfeb-40d7-44ca-ace0-c70ea7c9cbf9
	  Boot ID:                    a6572791-4a08-4951-9ad0-417f91eb1590
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-gp7d2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 coredns-5dd5756b68-wpmbp                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m28s
	  kube-system                 etcd-multinode-472593                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m40s
	  kube-system                 kindnet-zhh5c                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m28s
	  kube-system                 kube-apiserver-multinode-472593             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 kube-controller-manager-multinode-472593    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 kube-proxy-m4w4g                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-scheduler-multinode-472593             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m26s                  kube-proxy       
	  Normal  Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m48s (x8 over 2m48s)  kubelet          Node multinode-472593 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m48s (x8 over 2m48s)  kubelet          Node multinode-472593 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m48s (x7 over 2m48s)  kubelet          Node multinode-472593 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m40s                  kubelet          Node multinode-472593 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m40s                  kubelet          Node multinode-472593 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m40s                  kubelet          Node multinode-472593 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m28s                  node-controller  Node multinode-472593 event: Registered Node multinode-472593 in Controller
	  Normal  NodeReady                2m16s                  kubelet          Node multinode-472593 status is now: NodeReady
	
	
	Name:               multinode-472593-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-472593-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae
	                    minikube.k8s.io/name=multinode-472593
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_08T21_10_15_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 21:09:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-472593-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 21:10:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 21:09:50 +0000   Mon, 08 Jan 2024 21:09:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 21:09:50 +0000   Mon, 08 Jan 2024 21:09:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 21:09:50 +0000   Mon, 08 Jan 2024 21:09:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 21:09:50 +0000   Mon, 08 Jan 2024 21:09:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.225
	  Hostname:    multinode-472593-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 917e02a74d24420f91d93035d152fe52
	  System UUID:                917e02a7-4d24-420f-91d9-3035d152fe52
	  Boot ID:                    24f7bf66-44c1-48b3-a59f-1d6e778f61ba
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-px9bf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kindnet-t9sz2               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      100s
	  kube-system                 kube-proxy-cxgc4            0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 94s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  100s (x5 over 101s)  kubelet          Node multinode-472593-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    100s (x5 over 101s)  kubelet          Node multinode-472593-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     100s (x5 over 101s)  kubelet          Node multinode-472593-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           98s                  node-controller  Node multinode-472593-m02 event: Registered Node multinode-472593-m02 in Controller
	  Normal  NodeReady                84s                  kubelet          Node multinode-472593-m02 status is now: NodeReady
	
	
	Name:               multinode-472593-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-472593-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3acaa24f4d1a4d3a0ca66bc089ca1776b2f58eae
	                    minikube.k8s.io/name=multinode-472593
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_08T21_10_15_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 21:10:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-472593-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 21:10:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 21:10:25 +0000   Mon, 08 Jan 2024 21:10:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 21:10:25 +0000   Mon, 08 Jan 2024 21:10:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 21:10:25 +0000   Mon, 08 Jan 2024 21:10:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 21:10:25 +0000   Mon, 08 Jan 2024 21:10:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.70
	  Hostname:    multinode-472593-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 2b629adad19d4265bec4ed135a753fe2
	  System UUID:                2b629ada-d19d-4265-bec4-ed135a753fe2
	  Boot ID:                    e3787410-6f23-4393-a65b-e4cb4434940d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-ft9w5       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      45s
	  kube-system                 kube-proxy-rbxh2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 39s                kube-proxy       
	  Normal  NodeHasSufficientMemory  45s (x5 over 46s)  kubelet          Node multinode-472593-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s (x5 over 46s)  kubelet          Node multinode-472593-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s (x5 over 46s)  kubelet          Node multinode-472593-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           43s                node-controller  Node multinode-472593-m03 event: Registered Node multinode-472593-m03 in Controller
	  Normal  NodeReady                34s                kubelet          Node multinode-472593-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.063097] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.290271] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.750018] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.122153] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.027655] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.163064] systemd-fstab-generator[546]: Ignoring "noauto" for root device
	[  +0.092550] systemd-fstab-generator[557]: Ignoring "noauto" for root device
	[  +1.011414] systemd-fstab-generator[733]: Ignoring "noauto" for root device
	[  +0.282742] systemd-fstab-generator[772]: Ignoring "noauto" for root device
	[  +0.095642] systemd-fstab-generator[783]: Ignoring "noauto" for root device
	[  +0.120805] systemd-fstab-generator[796]: Ignoring "noauto" for root device
	[  +1.500675] systemd-fstab-generator[954]: Ignoring "noauto" for root device
	[  +0.109734] systemd-fstab-generator[965]: Ignoring "noauto" for root device
	[  +0.108530] systemd-fstab-generator[976]: Ignoring "noauto" for root device
	[  +0.101822] systemd-fstab-generator[987]: Ignoring "noauto" for root device
	[  +0.121985] systemd-fstab-generator[1001]: Ignoring "noauto" for root device
	[Jan 8 21:08] systemd-fstab-generator[1110]: Ignoring "noauto" for root device
	[  +3.983405] kauditd_printk_skb: 53 callbacks suppressed
	[  +4.398223] systemd-fstab-generator[1496]: Ignoring "noauto" for root device
	[  +8.263883] systemd-fstab-generator[2422]: Ignoring "noauto" for root device
	[ +13.766643] kauditd_printk_skb: 39 callbacks suppressed
	[  +6.879778] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [5917a713cdfa] <==
	{"level":"info","ts":"2024-01-08T21:08:14.706298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a69e859ffe38fcde became candidate at term 2"}
	{"level":"info","ts":"2024-01-08T21:08:14.706303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a69e859ffe38fcde received MsgVoteResp from a69e859ffe38fcde at term 2"}
	{"level":"info","ts":"2024-01-08T21:08:14.706311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a69e859ffe38fcde became leader at term 2"}
	{"level":"info","ts":"2024-01-08T21:08:14.706318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a69e859ffe38fcde elected leader a69e859ffe38fcde at term 2"}
	{"level":"info","ts":"2024-01-08T21:08:14.708014Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:08:14.709288Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a69e859ffe38fcde","local-member-attributes":"{Name:multinode-472593 ClientURLs:[https://192.168.39.250:2379]}","request-path":"/0/members/a69e859ffe38fcde/attributes","cluster-id":"f7a04275a0bf31","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-08T21:08:14.709617Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:08:14.709951Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f7a04275a0bf31","local-member-id":"a69e859ffe38fcde","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:08:14.710029Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:08:14.710071Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-08T21:08:14.710095Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-08T21:08:14.710102Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-08T21:08:14.710107Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-08T21:08:14.710823Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-08T21:08:14.711083Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.250:2379"}
	{"level":"info","ts":"2024-01-08T21:09:18.550296Z","caller":"traceutil/trace.go:171","msg":"trace[1115014482] transaction","detail":"{read_only:false; response_revision:486; number_of_response:1; }","duration":"181.48314ms","start":"2024-01-08T21:09:18.368769Z","end":"2024-01-08T21:09:18.550252Z","steps":["trace[1115014482] 'process raft request'  (duration: 181.320752ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:09:18.551171Z","caller":"traceutil/trace.go:171","msg":"trace[1075747738] linearizableReadLoop","detail":"{readStateIndex:506; appliedIndex:506; }","duration":"181.476447ms","start":"2024-01-08T21:09:18.369669Z","end":"2024-01-08T21:09:18.551146Z","steps":["trace[1075747738] 'read index received'  (duration: 181.472751ms)","trace[1075747738] 'applied index is now lower than readState.Index'  (duration: 3.08µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-08T21:09:18.551685Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.874809ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/csr-rffsn\" ","response":"range_response_count:1 size:1337"}
	{"level":"info","ts":"2024-01-08T21:09:18.551995Z","caller":"traceutil/trace.go:171","msg":"trace[1990838078] range","detail":"{range_begin:/registry/certificatesigningrequests/csr-rffsn; range_end:; response_count:1; response_revision:486; }","duration":"182.338309ms","start":"2024-01-08T21:09:18.369641Z","end":"2024-01-08T21:09:18.55198Z","steps":["trace[1990838078] 'agreement among raft nodes before linearized reading'  (duration: 181.842697ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T21:09:18.885258Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.626916ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T21:09:18.885748Z","caller":"traceutil/trace.go:171","msg":"trace[91677741] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:487; }","duration":"138.180805ms","start":"2024-01-08T21:09:18.747543Z","end":"2024-01-08T21:09:18.885724Z","steps":["trace[91677741] 'range keys from in-memory index tree'  (duration: 137.514283ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:10:13.458112Z","caller":"traceutil/trace.go:171","msg":"trace[430479285] transaction","detail":"{read_only:false; response_revision:616; number_of_response:1; }","duration":"111.495611ms","start":"2024-01-08T21:10:13.346602Z","end":"2024-01-08T21:10:13.458098Z","steps":["trace[430479285] 'process raft request'  (duration: 111.396243ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T21:10:15.748356Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.083016ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kindnet\" ","response":"range_response_count:1 size:4723"}
	{"level":"info","ts":"2024-01-08T21:10:15.748424Z","caller":"traceutil/trace.go:171","msg":"trace[498007151] range","detail":"{range_begin:/registry/daemonsets/kube-system/kindnet; range_end:; response_count:1; response_revision:639; }","duration":"204.168285ms","start":"2024-01-08T21:10:15.544244Z","end":"2024-01-08T21:10:15.748412Z","steps":["trace[498007151] 'range keys from in-memory index tree'  (duration: 203.848693ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T21:10:16.254359Z","caller":"traceutil/trace.go:171","msg":"trace[609712236] transaction","detail":"{read_only:false; response_revision:641; number_of_response:1; }","duration":"156.104246ms","start":"2024-01-08T21:10:16.098237Z","end":"2024-01-08T21:10:16.254341Z","steps":["trace[609712236] 'process raft request'  (duration: 155.054928ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:10:59 up 3 min,  0 users,  load average: 0.39, 0.33, 0.14
	Linux multinode-472593 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [90955ae5f2f4] <==
	I0108 21:10:20.038803       1 main.go:223] Handling node with IPs: map[192.168.39.250:{}]
	I0108 21:10:20.038853       1 main.go:227] handling current node
	I0108 21:10:20.038868       1 main.go:223] Handling node with IPs: map[192.168.39.225:{}]
	I0108 21:10:20.038875       1 main.go:250] Node multinode-472593-m02 has CIDR [10.244.1.0/24] 
	I0108 21:10:20.039446       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0108 21:10:20.039492       1 main.go:250] Node multinode-472593-m03 has CIDR [10.244.2.0/24] 
	I0108 21:10:20.039577       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.70 Flags: [] Table: 0} 
	I0108 21:10:30.047696       1 main.go:223] Handling node with IPs: map[192.168.39.250:{}]
	I0108 21:10:30.047724       1 main.go:227] handling current node
	I0108 21:10:30.047738       1 main.go:223] Handling node with IPs: map[192.168.39.225:{}]
	I0108 21:10:30.047762       1 main.go:250] Node multinode-472593-m02 has CIDR [10.244.1.0/24] 
	I0108 21:10:30.047921       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0108 21:10:30.047930       1 main.go:250] Node multinode-472593-m03 has CIDR [10.244.2.0/24] 
	I0108 21:10:40.055350       1 main.go:223] Handling node with IPs: map[192.168.39.250:{}]
	I0108 21:10:40.055378       1 main.go:227] handling current node
	I0108 21:10:40.055391       1 main.go:223] Handling node with IPs: map[192.168.39.225:{}]
	I0108 21:10:40.055397       1 main.go:250] Node multinode-472593-m02 has CIDR [10.244.1.0/24] 
	I0108 21:10:40.055650       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0108 21:10:40.055663       1 main.go:250] Node multinode-472593-m03 has CIDR [10.244.2.0/24] 
	I0108 21:10:50.063662       1 main.go:223] Handling node with IPs: map[192.168.39.250:{}]
	I0108 21:10:50.064138       1 main.go:227] handling current node
	I0108 21:10:50.064348       1 main.go:223] Handling node with IPs: map[192.168.39.225:{}]
	I0108 21:10:50.064442       1 main.go:250] Node multinode-472593-m02 has CIDR [10.244.1.0/24] 
	I0108 21:10:50.064770       1 main.go:223] Handling node with IPs: map[192.168.39.70:{}]
	I0108 21:10:50.064991       1 main.go:250] Node multinode-472593-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [05835bf9e682] <==
	I0108 21:08:16.186678       1 shared_informer.go:318] Caches are synced for node_authorizer
	E0108 21:08:16.196263       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	E0108 21:08:16.201924       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0108 21:08:16.207778       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0108 21:08:16.208727       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0108 21:08:16.210444       1 controller.go:624] quota admission added evaluator for: namespaces
	I0108 21:08:16.211249       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0108 21:08:16.211587       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0108 21:08:16.212023       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0108 21:08:16.405331       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0108 21:08:17.024004       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0108 21:08:17.036831       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0108 21:08:17.036845       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0108 21:08:17.738901       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0108 21:08:17.786532       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0108 21:08:17.944129       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0108 21:08:17.954470       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.250]
	I0108 21:08:17.955525       1 controller.go:624] quota admission added evaluator for: endpoints
	I0108 21:08:17.960266       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0108 21:08:18.097968       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0108 21:08:19.358754       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0108 21:08:19.377309       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0108 21:08:19.396488       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0108 21:08:31.564164       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0108 21:08:31.620495       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [970aa552c28c] <==
	I0108 21:09:19.932359       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-472593-m02" podCIDRs=["10.244.1.0/24"]
	I0108 21:09:19.939020       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-t9sz2"
	I0108 21:09:19.943275       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-cxgc4"
	I0108 21:09:21.016552       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-472593-m02"
	I0108 21:09:21.016646       1 event.go:307] "Event occurred" object="multinode-472593-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-472593-m02 event: Registered Node multinode-472593-m02 in Controller"
	I0108 21:09:35.064913       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-472593-m02"
	I0108 21:09:37.350885       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0108 21:09:37.375047       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-px9bf"
	I0108 21:09:37.392654       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-gp7d2"
	I0108 21:09:37.411287       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="60.731367ms"
	I0108 21:09:37.431050       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="19.618277ms"
	I0108 21:09:37.431243       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="70.034µs"
	I0108 21:09:37.435753       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="41.553µs"
	I0108 21:09:40.264574       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="9.765159ms"
	I0108 21:09:40.267131       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="2.439059ms"
	I0108 21:09:40.370839       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.310317ms"
	I0108 21:09:40.370958       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="58.41µs"
	I0108 21:10:14.744126       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-472593-m02"
	I0108 21:10:14.745373       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-472593-m03\" does not exist"
	I0108 21:10:14.757570       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-472593-m03" podCIDRs=["10.244.2.0/24"]
	I0108 21:10:14.773651       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rbxh2"
	I0108 21:10:14.774635       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-ft9w5"
	I0108 21:10:16.036881       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-472593-m03"
	I0108 21:10:16.037267       1 event.go:307] "Event occurred" object="multinode-472593-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-472593-m03 event: Registered Node multinode-472593-m03 in Controller"
	I0108 21:10:25.909060       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-472593-m03"
	
	
	==> kube-proxy [e4312a76bdf5] <==
	I0108 21:08:32.838821       1 server_others.go:69] "Using iptables proxy"
	I0108 21:08:32.856406       1 node.go:141] Successfully retrieved node IP: 192.168.39.250
	I0108 21:08:32.931585       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0108 21:08:32.931622       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 21:08:32.937235       1 server_others.go:152] "Using iptables Proxier"
	I0108 21:08:32.937295       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 21:08:32.937542       1 server.go:846] "Version info" version="v1.28.4"
	I0108 21:08:32.937570       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 21:08:32.938303       1 config.go:188] "Starting service config controller"
	I0108 21:08:32.938338       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 21:08:32.938367       1 config.go:97] "Starting endpoint slice config controller"
	I0108 21:08:32.938390       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 21:08:32.938903       1 config.go:315] "Starting node config controller"
	I0108 21:08:32.938934       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 21:08:33.039277       1 shared_informer.go:318] Caches are synced for node config
	I0108 21:08:33.039296       1 shared_informer.go:318] Caches are synced for service config
	I0108 21:08:33.039321       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [fec7845e9e4a] <==
	W0108 21:08:17.095516       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0108 21:08:17.095543       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0108 21:08:17.109126       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 21:08:17.109232       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 21:08:17.118452       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0108 21:08:17.118584       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0108 21:08:17.199595       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 21:08:17.199655       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 21:08:17.216731       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 21:08:17.216864       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 21:08:17.227717       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 21:08:17.227979       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 21:08:17.283398       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 21:08:17.283447       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 21:08:17.378305       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 21:08:17.378376       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 21:08:17.389156       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0108 21:08:17.389302       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0108 21:08:17.449680       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 21:08:17.449798       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 21:08:17.463629       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 21:08:17.463764       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0108 21:08:17.478064       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 21:08:17.478340       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0108 21:08:18.841124       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 21:07:42 UTC, ends at Mon 2024-01-08 21:11:00 UTC. --
	Jan 08 21:08:31 multinode-472593 kubelet[2440]: I0108 21:08:31.782109    2440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1394b324-16bf-4300-ab4d-443652d36475-lib-modules\") pod \"kube-proxy-m4w4g\" (UID: \"1394b324-16bf-4300-ab4d-443652d36475\") " pod="kube-system/kube-proxy-m4w4g"
	Jan 08 21:08:31 multinode-472593 kubelet[2440]: I0108 21:08:31.782254    2440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0452fc75-b53d-4528-a098-bbf6f7f9b197-cni-cfg\") pod \"kindnet-zhh5c\" (UID: \"0452fc75-b53d-4528-a098-bbf6f7f9b197\") " pod="kube-system/kindnet-zhh5c"
	Jan 08 21:08:35 multinode-472593 kubelet[2440]: I0108 21:08:35.560986    2440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a722ce28407e5b819117b6553528e75e9a342c8685eafcc31dcf446f40404106"
	Jan 08 21:08:39 multinode-472593 kubelet[2440]: I0108 21:08:39.627138    2440 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-m4w4g" podStartSLOduration=8.627101793 podCreationTimestamp="2024-01-08 21:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 21:08:35.587645044 +0000 UTC m=+16.263955508" watchObservedRunningTime="2024-01-08 21:08:39.627101793 +0000 UTC m=+20.303412257"
	Jan 08 21:08:39 multinode-472593 kubelet[2440]: I0108 21:08:39.671959    2440 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-zhh5c" podStartSLOduration=5.138224833 podCreationTimestamp="2024-01-08 21:08:31 +0000 UTC" firstStartedPulling="2024-01-08 21:08:35.567465936 +0000 UTC m=+16.243776380" lastFinishedPulling="2024-01-08 21:08:39.101163848 +0000 UTC m=+19.777474293" observedRunningTime="2024-01-08 21:08:39.628420729 +0000 UTC m=+20.304731196" watchObservedRunningTime="2024-01-08 21:08:39.671922746 +0000 UTC m=+20.348233210"
	Jan 08 21:08:43 multinode-472593 kubelet[2440]: I0108 21:08:43.296918    2440 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 08 21:08:43 multinode-472593 kubelet[2440]: I0108 21:08:43.337629    2440 topology_manager.go:215] "Topology Admit Handler" podUID="3dfbd2f3-95c8-4c55-9312-e79187f61d66" podNamespace="kube-system" podName="coredns-5dd5756b68-wpmbp"
	Jan 08 21:08:43 multinode-472593 kubelet[2440]: I0108 21:08:43.339647    2440 topology_manager.go:215] "Topology Admit Handler" podUID="eb978531-85e2-4a55-8f95-4ff3bc1595c8" podNamespace="kube-system" podName="storage-provisioner"
	Jan 08 21:08:43 multinode-472593 kubelet[2440]: I0108 21:08:43.369508    2440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3dfbd2f3-95c8-4c55-9312-e79187f61d66-config-volume\") pod \"coredns-5dd5756b68-wpmbp\" (UID: \"3dfbd2f3-95c8-4c55-9312-e79187f61d66\") " pod="kube-system/coredns-5dd5756b68-wpmbp"
	Jan 08 21:08:43 multinode-472593 kubelet[2440]: I0108 21:08:43.369663    2440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl7s8\" (UniqueName: \"kubernetes.io/projected/3dfbd2f3-95c8-4c55-9312-e79187f61d66-kube-api-access-jl7s8\") pod \"coredns-5dd5756b68-wpmbp\" (UID: \"3dfbd2f3-95c8-4c55-9312-e79187f61d66\") " pod="kube-system/coredns-5dd5756b68-wpmbp"
	Jan 08 21:08:43 multinode-472593 kubelet[2440]: I0108 21:08:43.369762    2440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/eb978531-85e2-4a55-8f95-4ff3bc1595c8-tmp\") pod \"storage-provisioner\" (UID: \"eb978531-85e2-4a55-8f95-4ff3bc1595c8\") " pod="kube-system/storage-provisioner"
	Jan 08 21:08:43 multinode-472593 kubelet[2440]: I0108 21:08:43.369906    2440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsdm5\" (UniqueName: \"kubernetes.io/projected/eb978531-85e2-4a55-8f95-4ff3bc1595c8-kube-api-access-fsdm5\") pod \"storage-provisioner\" (UID: \"eb978531-85e2-4a55-8f95-4ff3bc1595c8\") " pod="kube-system/storage-provisioner"
	Jan 08 21:08:45 multinode-472593 kubelet[2440]: I0108 21:08:45.783612    2440 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-wpmbp" podStartSLOduration=14.783561344 podCreationTimestamp="2024-01-08 21:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 21:08:44.732018093 +0000 UTC m=+25.408328557" watchObservedRunningTime="2024-01-08 21:08:45.783561344 +0000 UTC m=+26.459871825"
	Jan 08 21:08:45 multinode-472593 kubelet[2440]: I0108 21:08:45.784427    2440 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.784395322 podCreationTimestamp="2024-01-08 21:08:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-08 21:08:45.780939744 +0000 UTC m=+26.457250204" watchObservedRunningTime="2024-01-08 21:08:45.784395322 +0000 UTC m=+26.460705787"
	Jan 08 21:09:19 multinode-472593 kubelet[2440]: E0108 21:09:19.709543    2440 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:09:19 multinode-472593 kubelet[2440]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:09:19 multinode-472593 kubelet[2440]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:09:19 multinode-472593 kubelet[2440]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 08 21:09:37 multinode-472593 kubelet[2440]: I0108 21:09:37.415174    2440 topology_manager.go:215] "Topology Admit Handler" podUID="dbc0447a-0de2-4777-aa91-3b87a1723bf9" podNamespace="default" podName="busybox-5bc68d56bd-gp7d2"
	Jan 08 21:09:37 multinode-472593 kubelet[2440]: I0108 21:09:37.508295    2440 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6zzc\" (UniqueName: \"kubernetes.io/projected/dbc0447a-0de2-4777-aa91-3b87a1723bf9-kube-api-access-t6zzc\") pod \"busybox-5bc68d56bd-gp7d2\" (UID: \"dbc0447a-0de2-4777-aa91-3b87a1723bf9\") " pod="default/busybox-5bc68d56bd-gp7d2"
	Jan 08 21:09:38 multinode-472593 kubelet[2440]: I0108 21:09:38.314333    2440 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59befc33b9b8feb29e6abe1a22313759fd9913bfe5115b30c223c847b46b5a80"
	Jan 08 21:10:19 multinode-472593 kubelet[2440]: E0108 21:10:19.709540    2440 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 08 21:10:19 multinode-472593 kubelet[2440]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 08 21:10:19 multinode-472593 kubelet[2440]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 08 21:10:19 multinode-472593 kubelet[2440]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-472593 -n multinode-472593
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-472593 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StartAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StartAfterStop (20.77s)
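The post-mortem step above (helpers_test.go:261) shells out to kubectl with a field selector to surface any pod that is not in the Running phase. A minimal client-go sketch of the same query, assuming a kubeconfig at the default location whose current context points at the cluster (the harness pins --context multinode-472593 instead; that plumbing is omitted here):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from ~/.kube/config; explicit context selection is omitted.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Same field selector as the kubectl post-mortem call above.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}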

                                                
                                    

Test pass (294/329)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 8.35
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.28.4/json-events 6.79
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.08
17 TestDownloadOnly/v1.29.0-rc.2/json-events 4.39
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.07
23 TestDownloadOnly/DeleteAll 0.14
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.13
26 TestBinaryMirror 0.58
27 TestOffline 70.51
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
32 TestAddons/Setup 159.47
34 TestAddons/parallel/Registry 18.28
35 TestAddons/parallel/Ingress 26.12
36 TestAddons/parallel/InspektorGadget 11.8
37 TestAddons/parallel/MetricsServer 6.15
38 TestAddons/parallel/HelmTiller 11.2
40 TestAddons/parallel/CSI 71.71
41 TestAddons/parallel/Headlamp 17.17
42 TestAddons/parallel/CloudSpanner 5.88
43 TestAddons/parallel/LocalPath 64.28
44 TestAddons/parallel/NvidiaDevicePlugin 6.49
45 TestAddons/parallel/Yakd 6.01
48 TestAddons/serial/GCPAuth/Namespaces 0.12
49 TestAddons/StoppedEnableDisable 13.42
50 TestCertOptions 68.36
51 TestCertExpiration 343.3
52 TestDockerFlags 90
53 TestForceSystemdFlag 60.65
54 TestForceSystemdEnv 85.22
56 TestKVMDriverInstallOrUpdate 6.24
60 TestErrorSpam/setup 51.73
61 TestErrorSpam/start 0.39
62 TestErrorSpam/status 0.84
63 TestErrorSpam/pause 1.23
64 TestErrorSpam/unpause 1.4
65 TestErrorSpam/stop 12.54
68 TestFunctional/serial/CopySyncFile 0
69 TestFunctional/serial/StartWithProxy 62.49
70 TestFunctional/serial/AuditLog 0
71 TestFunctional/serial/SoftStart 40.31
72 TestFunctional/serial/KubeContext 0.04
73 TestFunctional/serial/KubectlGetPods 0.08
76 TestFunctional/serial/CacheCmd/cache/add_remote 2.38
77 TestFunctional/serial/CacheCmd/cache/add_local 1.3
78 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
79 TestFunctional/serial/CacheCmd/cache/list 0.06
80 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
81 TestFunctional/serial/CacheCmd/cache/cache_reload 1.28
82 TestFunctional/serial/CacheCmd/cache/delete 0.12
83 TestFunctional/serial/MinikubeKubectlCmd 0.12
84 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
85 TestFunctional/serial/ExtraConfig 40.82
86 TestFunctional/serial/ComponentHealth 0.07
87 TestFunctional/serial/LogsCmd 1.1
88 TestFunctional/serial/LogsFileCmd 1.12
89 TestFunctional/serial/InvalidService 4.46
91 TestFunctional/parallel/ConfigCmd 0.47
92 TestFunctional/parallel/DashboardCmd 23.51
93 TestFunctional/parallel/DryRun 0.32
94 TestFunctional/parallel/InternationalLanguage 0.16
95 TestFunctional/parallel/StatusCmd 1.08
99 TestFunctional/parallel/ServiceCmdConnect 11.54
100 TestFunctional/parallel/AddonsCmd 0.15
101 TestFunctional/parallel/PersistentVolumeClaim 58.48
103 TestFunctional/parallel/SSHCmd 0.53
104 TestFunctional/parallel/CpCmd 1.66
105 TestFunctional/parallel/MySQL 41.53
106 TestFunctional/parallel/FileSync 0.25
107 TestFunctional/parallel/CertSync 1.51
111 TestFunctional/parallel/NodeLabels 0.06
113 TestFunctional/parallel/NonActiveRuntimeDisabled 0.22
115 TestFunctional/parallel/License 0.16
116 TestFunctional/parallel/ServiceCmd/DeployApp 13.25
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
127 TestFunctional/parallel/ProfileCmd/profile_list 0.29
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.29
129 TestFunctional/parallel/Version/short 0.08
130 TestFunctional/parallel/Version/components 0.78
131 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
132 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
133 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
134 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
135 TestFunctional/parallel/ImageCommands/ImageBuild 3.78
136 TestFunctional/parallel/ImageCommands/Setup 1.21
137 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.81
138 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.54
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.23
140 TestFunctional/parallel/ServiceCmd/List 0.38
141 TestFunctional/parallel/ServiceCmd/JSONOutput 0.32
142 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
143 TestFunctional/parallel/ServiceCmd/Format 0.41
144 TestFunctional/parallel/DockerEnv/bash 1.08
145 TestFunctional/parallel/ServiceCmd/URL 0.44
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
149 TestFunctional/parallel/MountCmd/any-port 22
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.09
151 TestFunctional/parallel/ImageCommands/ImageRemove 1.25
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.76
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.27
154 TestFunctional/parallel/MountCmd/specific-port 2.04
155 TestFunctional/parallel/MountCmd/VerifyCleanup 1.49
156 TestFunctional/delete_addon-resizer_images 0.07
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
159 TestGvisorAddon 292.04
162 TestImageBuild/serial/Setup 46.92
163 TestImageBuild/serial/NormalBuild 1.54
164 TestImageBuild/serial/BuildWithBuildArg 1.48
165 TestImageBuild/serial/BuildWithDockerIgnore 0.42
166 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.31
169 TestIngressAddonLegacy/StartLegacyK8sCluster 75.93
171 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 16.9
172 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.54
173 TestIngressAddonLegacy/serial/ValidateIngressAddons 36.11
176 TestJSONOutput/start/Command 67.57
177 TestJSONOutput/start/Audit 0
179 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/pause/Command 0.56
183 TestJSONOutput/pause/Audit 0
185 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/unpause/Command 0.53
189 TestJSONOutput/unpause/Audit 0
191 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/stop/Command 8.11
195 TestJSONOutput/stop/Audit 0
197 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
199 TestErrorJSONOutput 0.22
204 TestMainNoArgs 0.06
205 TestMinikubeProfile 101.34
208 TestMountStart/serial/StartWithMountFirst 29.91
209 TestMountStart/serial/VerifyMountFirst 0.4
210 TestMountStart/serial/StartWithMountSecond 29.79
211 TestMountStart/serial/VerifyMountSecond 0.41
212 TestMountStart/serial/DeleteFirst 0.7
213 TestMountStart/serial/VerifyMountPostDelete 0.41
214 TestMountStart/serial/Stop 2.1
215 TestMountStart/serial/RestartStopped 24.65
216 TestMountStart/serial/VerifyMountPostStop 0.4
219 TestMultiNode/serial/FreshStart2Nodes 126.01
220 TestMultiNode/serial/DeployApp2Nodes 4.92
221 TestMultiNode/serial/PingHostFrom2Pods 0.94
222 TestMultiNode/serial/AddNode 45.49
223 TestMultiNode/serial/MultiNodeLabels 0.06
224 TestMultiNode/serial/ProfileList 0.22
225 TestMultiNode/serial/CopyFile 7.94
226 TestMultiNode/serial/StopNode 3.34
228 TestMultiNode/serial/RestartKeepsNodes 253.57
229 TestMultiNode/serial/DeleteNode 1.57
230 TestMultiNode/serial/StopMultiNode 25.58
231 TestMultiNode/serial/RestartMultiNode 103.88
232 TestMultiNode/serial/ValidateNameConflict 51.46
237 TestPreload 170.67
239 TestScheduledStopUnix 120.77
240 TestSkaffold 138.83
243 TestRunningBinaryUpgrade 177.94
245 TestKubernetesUpgrade 235.69
266 TestPause/serial/Start 117.63
268 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
269 TestNoKubernetes/serial/StartWithK8s 66.31
270 TestPause/serial/SecondStartNoReconfiguration 76.16
271 TestNoKubernetes/serial/StartWithStopK8s 38.06
272 TestNoKubernetes/serial/Start 32.04
273 TestStoppedBinaryUpgrade/Setup 0.41
274 TestStoppedBinaryUpgrade/Upgrade 228.49
275 TestPause/serial/Pause 0.61
276 TestPause/serial/VerifyStatus 0.26
277 TestPause/serial/Unpause 0.57
278 TestPause/serial/PauseAgain 0.74
279 TestPause/serial/DeletePaused 0.83
280 TestPause/serial/VerifyDeletedResources 0.25
281 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
282 TestNoKubernetes/serial/ProfileList 14.52
283 TestNoKubernetes/serial/Stop 2.21
284 TestNoKubernetes/serial/StartNoArgs 37.83
285 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.52
286 TestStoppedBinaryUpgrade/MinikubeLogs 1.4
287 TestNetworkPlugins/group/auto/Start 75.67
288 TestNetworkPlugins/group/kindnet/Start 93.62
289 TestNetworkPlugins/group/auto/KubeletFlags 0.26
290 TestNetworkPlugins/group/auto/NetCatPod 12.34
291 TestNetworkPlugins/group/auto/DNS 0.21
292 TestNetworkPlugins/group/auto/Localhost 0.16
293 TestNetworkPlugins/group/auto/HairPin 0.15
294 TestNetworkPlugins/group/calico/Start 109.11
295 TestNetworkPlugins/group/custom-flannel/Start 92.93
296 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
297 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
298 TestNetworkPlugins/group/kindnet/NetCatPod 13.28
299 TestNetworkPlugins/group/kindnet/DNS 0.23
300 TestNetworkPlugins/group/kindnet/Localhost 0.17
301 TestNetworkPlugins/group/kindnet/HairPin 0.18
302 TestNetworkPlugins/group/false/Start 73.96
303 TestNetworkPlugins/group/enable-default-cni/Start 102.98
304 TestNetworkPlugins/group/calico/ControllerPod 6.02
305 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
306 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.42
307 TestNetworkPlugins/group/calico/KubeletFlags 0.3
308 TestNetworkPlugins/group/calico/NetCatPod 15.58
309 TestNetworkPlugins/group/custom-flannel/DNS 0.25
310 TestNetworkPlugins/group/custom-flannel/Localhost 0.26
311 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
312 TestNetworkPlugins/group/calico/DNS 0.26
313 TestNetworkPlugins/group/calico/Localhost 0.21
314 TestNetworkPlugins/group/calico/HairPin 0.2
315 TestNetworkPlugins/group/flannel/Start 88.03
316 TestNetworkPlugins/group/bridge/Start 95.96
317 TestNetworkPlugins/group/false/KubeletFlags 0.24
318 TestNetworkPlugins/group/false/NetCatPod 11.27
319 TestNetworkPlugins/group/false/DNS 0.18
320 TestNetworkPlugins/group/false/Localhost 0.16
321 TestNetworkPlugins/group/false/HairPin 0.17
322 TestNetworkPlugins/group/kubenet/Start 87.15
323 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
324 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.3
325 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
326 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
327 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
329 TestStartStop/group/old-k8s-version/serial/FirstStart 161.95
330 TestNetworkPlugins/group/flannel/ControllerPod 5.11
331 TestNetworkPlugins/group/flannel/KubeletFlags 0.52
332 TestNetworkPlugins/group/flannel/NetCatPod 14.94
333 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
334 TestNetworkPlugins/group/bridge/NetCatPod 12.26
335 TestNetworkPlugins/group/flannel/DNS 0.17
336 TestNetworkPlugins/group/flannel/Localhost 0.17
337 TestNetworkPlugins/group/flannel/HairPin 0.18
338 TestNetworkPlugins/group/bridge/DNS 0.25
339 TestNetworkPlugins/group/bridge/Localhost 0.22
340 TestNetworkPlugins/group/bridge/HairPin 0.19
342 TestStartStop/group/no-preload/serial/FirstStart 94.82
343 TestNetworkPlugins/group/kubenet/KubeletFlags 0.27
344 TestNetworkPlugins/group/kubenet/NetCatPod 13.33
346 TestStartStop/group/embed-certs/serial/FirstStart 131.79
347 TestNetworkPlugins/group/kubenet/DNS 0.18
348 TestNetworkPlugins/group/kubenet/Localhost 0.15
349 TestNetworkPlugins/group/kubenet/HairPin 0.16
351 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 101.93
352 TestStartStop/group/no-preload/serial/DeployApp 10.46
353 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.1
354 TestStartStop/group/no-preload/serial/Stop 13.15
355 TestStartStop/group/old-k8s-version/serial/DeployApp 9.44
356 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
357 TestStartStop/group/no-preload/serial/SecondStart 336.33
358 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.96
359 TestStartStop/group/old-k8s-version/serial/Stop 13.16
360 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
361 TestStartStop/group/old-k8s-version/serial/SecondStart 459.48
362 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.32
363 TestStartStop/group/embed-certs/serial/DeployApp 9.37
364 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.17
365 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.16
366 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.25
367 TestStartStop/group/embed-certs/serial/Stop 13.17
368 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
369 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 331.99
370 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.29
371 TestStartStop/group/embed-certs/serial/SecondStart 326.59
372 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 20.01
373 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
374 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
375 TestStartStop/group/no-preload/serial/Pause 2.87
377 TestStartStop/group/newest-cni/serial/FirstStart 70.65
378 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
379 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
380 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.09
381 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
382 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
383 TestStartStop/group/embed-certs/serial/Pause 2.81
384 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
385 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.46
386 TestStartStop/group/newest-cni/serial/DeployApp 0
387 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.88
388 TestStartStop/group/newest-cni/serial/Stop 13.13
389 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
390 TestStartStop/group/newest-cni/serial/SecondStart 46.45
391 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
392 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
393 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
394 TestStartStop/group/old-k8s-version/serial/Pause 2.58
395 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
396 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
397 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
398 TestStartStop/group/newest-cni/serial/Pause 2.36
TestDownloadOnly/v1.16.0/json-events (8.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-357928 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-357928 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 : (8.345567934s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (8.35s)
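The command under test passes -o=json, so minikube reports progress as line-delimited JSON events on stdout, which is what "json-events" exercises. A minimal sketch of consuming such a stream, assuming minikube's CloudEvents-style "type" and "data" fields; this is generic scaffolding, not the harness's actual parser:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same invocation as the test above, minus --alsologtostderr (stderr only).
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-o=json",
		"--download-only", "-p", "download-only-357928", "--force",
		"--kubernetes-version=v1.16.0", "--container-runtime=docker", "--driver=kvm2")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var ev map[string]any
		// Each stdout line is expected to be one self-contained JSON event.
		if json.Unmarshal(sc.Bytes(), &ev) == nil {
			fmt.Println(ev["type"], ev["data"])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}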

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-357928
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-357928: exit status 85 (73.417436ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-357928 | jenkins | v1.32.0 | 08 Jan 24 20:50 UTC |          |
	|         | -p download-only-357928        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:50:12
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:50:12.948505  150000 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:50:12.948739  150000 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:50:12.948748  150000 out.go:309] Setting ErrFile to fd 2...
	I0108 20:50:12.948753  150000 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:50:12.948944  150000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-142784/.minikube/bin
	W0108 20:50:12.949083  150000 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17866-142784/.minikube/config/config.json: open /home/jenkins/minikube-integration/17866-142784/.minikube/config/config.json: no such file or directory
	I0108 20:50:12.949745  150000 out.go:303] Setting JSON to true
	I0108 20:50:12.950610  150000 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5544,"bootTime":1704741469,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:50:12.950669  150000 start.go:138] virtualization: kvm guest
	I0108 20:50:12.953145  150000 out.go:97] [download-only-357928] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 20:50:12.954847  150000 out.go:169] MINIKUBE_LOCATION=17866
	W0108 20:50:12.953273  150000 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17866-142784/.minikube/cache/preloaded-tarball: no such file or directory
	I0108 20:50:12.953326  150000 notify.go:220] Checking for updates...
	I0108 20:50:12.957859  150000 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:50:12.959452  150000 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17866-142784/kubeconfig
	I0108 20:50:12.960884  150000 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-142784/.minikube
	I0108 20:50:12.962280  150000 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0108 20:50:12.964881  150000 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 20:50:12.965109  150000 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:50:12.999458  150000 out.go:97] Using the kvm2 driver based on user configuration
	I0108 20:50:12.999482  150000 start.go:298] selected driver: kvm2
	I0108 20:50:12.999487  150000 start.go:902] validating driver "kvm2" against <nil>
	I0108 20:50:12.999819  150000 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:50:12.999910  150000 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17866-142784/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 20:50:13.014726  150000 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 20:50:13.014778  150000 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0108 20:50:13.015223  150000 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0108 20:50:13.015376  150000 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0108 20:50:13.015437  150000 cni.go:84] Creating CNI manager for ""
	I0108 20:50:13.015453  150000 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0108 20:50:13.015464  150000 start_flags.go:321] config:
	{Name:download-only-357928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-357928 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 20:50:13.015672  150000 iso.go:125] acquiring lock: {Name:mke23b0adb82dfaa94b41dcd107f45f9f7011388 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:50:13.017709  150000 out.go:97] Downloading VM boot image ...
	I0108 20:50:13.017753  150000 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17866-142784/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0108 20:50:16.831768  150000 out.go:97] Starting control plane node download-only-357928 in cluster download-only-357928
	I0108 20:50:16.831790  150000 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 20:50:16.855443  150000 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0108 20:50:16.855480  150000 cache.go:56] Caching tarball of preloaded images
	I0108 20:50:16.855640  150000 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0108 20:50:16.857549  150000 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0108 20:50:16.857579  150000 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0108 20:50:16.885206  150000 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/17866-142784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-357928"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
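
The exit status 85 above is expected rather than a failure: a --download-only start only populates the cache (ISO, preload tarball, kubectl) and never creates a node, so "minikube logs" has nothing to read. A minimal reproduction by hand, reusing the flags from this run:

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-357928 --force \
	  --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2
	out/minikube-linux-amd64 logs -p download-only-357928   # exits 85: the control plane node does not exist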

TestDownloadOnly/v1.28.4/json-events (6.79s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-357928 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-357928 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=kvm2 : (6.788703147s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (6.79s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-357928
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-357928: exit status 85 (77.415981ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-357928 | jenkins | v1.32.0 | 08 Jan 24 20:50 UTC |          |
	|         | -p download-only-357928        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-357928 | jenkins | v1.32.0 | 08 Jan 24 20:50 UTC |          |
	|         | -p download-only-357928        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:50:21
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:50:21.369268  150057 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:50:21.369454  150057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:50:21.369467  150057 out.go:309] Setting ErrFile to fd 2...
	I0108 20:50:21.369475  150057 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:50:21.369681  150057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-142784/.minikube/bin
	W0108 20:50:21.369832  150057 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17866-142784/.minikube/config/config.json: open /home/jenkins/minikube-integration/17866-142784/.minikube/config/config.json: no such file or directory
	I0108 20:50:21.370271  150057 out.go:303] Setting JSON to true
	I0108 20:50:21.371088  150057 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5552,"bootTime":1704741469,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:50:21.371146  150057 start.go:138] virtualization: kvm guest
	I0108 20:50:21.373454  150057 out.go:97] [download-only-357928] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 20:50:21.375324  150057 out.go:169] MINIKUBE_LOCATION=17866
	I0108 20:50:21.373599  150057 notify.go:220] Checking for updates...
	I0108 20:50:21.378420  150057 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:50:21.379890  150057 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17866-142784/kubeconfig
	I0108 20:50:21.381254  150057 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-142784/.minikube
	I0108 20:50:21.382694  150057 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0108 20:50:21.385558  150057 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 20:50:21.386033  150057 config.go:182] Loaded profile config "download-only-357928": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0108 20:50:21.386077  150057 start.go:810] api.Load failed for download-only-357928: filestore "download-only-357928": Docker machine "download-only-357928" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 20:50:21.386149  150057 driver.go:392] Setting default libvirt URI to qemu:///system
	W0108 20:50:21.386176  150057 start.go:810] api.Load failed for download-only-357928: filestore "download-only-357928": Docker machine "download-only-357928" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 20:50:21.418367  150057 out.go:97] Using the kvm2 driver based on existing profile
	I0108 20:50:21.418394  150057 start.go:298] selected driver: kvm2
	I0108 20:50:21.418399  150057 start.go:902] validating driver "kvm2" against &{Name:download-only-357928 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-357928 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 20:50:21.418814  150057 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:50:21.418900  150057 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17866-142784/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 20:50:21.433416  150057 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 20:50:21.434177  150057 cni.go:84] Creating CNI manager for ""
	I0108 20:50:21.434200  150057 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0108 20:50:21.434216  150057 start_flags.go:321] config:
	{Name:download-only-357928 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-357928 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 20:50:21.434356  150057 iso.go:125] acquiring lock: {Name:mke23b0adb82dfaa94b41dcd107f45f9f7011388 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 20:50:21.436168  150057 out.go:97] Starting control plane node download-only-357928 in cluster download-only-357928
	I0108 20:50:21.436184  150057 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 20:50:21.459161  150057 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0108 20:50:21.459200  150057 cache.go:56] Caching tarball of preloaded images
	I0108 20:50:21.459371  150057 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 20:50:21.461505  150057 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0108 20:50:21.461526  150057 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0108 20:50:21.491244  150057 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> /home/jenkins/minikube-integration/17866-142784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0108 20:50:24.761368  150057 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0108 20:50:24.761482  150057 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17866-142784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0108 20:50:25.607090  150057 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0108 20:50:25.607213  150057 profile.go:148] Saving config to /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/download-only-357928/config.json ...
	I0108 20:50:25.607417  150057 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0108 20:50:25.607570  150057 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17866-142784/.minikube/cache/linux/amd64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-357928"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

TestDownloadOnly/v1.29.0-rc.2/json-events (4.39s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-357928 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-357928 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=kvm2 : (4.392817449s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (4.39s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-357928
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-357928: exit status 85 (72.026299ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-357928 | jenkins | v1.32.0 | 08 Jan 24 20:50 UTC |          |
	|         | -p download-only-357928           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-357928 | jenkins | v1.32.0 | 08 Jan 24 20:50 UTC |          |
	|         | -p download-only-357928           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-357928 | jenkins | v1.32.0 | 08 Jan 24 20:50 UTC |          |
	|         | -p download-only-357928           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 20:50:28
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 20:50:28.238020  150113 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:50:28.238284  150113 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:50:28.238295  150113 out.go:309] Setting ErrFile to fd 2...
	I0108 20:50:28.238299  150113 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:50:28.238477  150113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-142784/.minikube/bin
	W0108 20:50:28.238592  150113 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17866-142784/.minikube/config/config.json: open /home/jenkins/minikube-integration/17866-142784/.minikube/config/config.json: no such file or directory
	I0108 20:50:28.239008  150113 out.go:303] Setting JSON to true
	I0108 20:50:28.239830  150113 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5559,"bootTime":1704741469,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:50:28.239890  150113 start.go:138] virtualization: kvm guest
	I0108 20:50:28.242036  150113 out.go:97] [download-only-357928] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 20:50:28.243631  150113 out.go:169] MINIKUBE_LOCATION=17866
	I0108 20:50:28.242233  150113 notify.go:220] Checking for updates...
	I0108 20:50:28.246399  150113 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:50:28.247908  150113 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17866-142784/kubeconfig
	I0108 20:50:28.249597  150113 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-142784/.minikube
	I0108 20:50:28.251063  150113 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-357928"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

TestDownloadOnly/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.14s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-357928
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-140956 --alsologtostderr --binary-mirror http://127.0.0.1:42987 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-140956" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-140956
--- PASS: TestBinaryMirror (0.58s)
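
TestBinaryMirror starts a local HTTP server and passes it via --binary-mirror so the kubectl/kubelet/kubeadm binaries are fetched from it rather than from dl.k8s.io; the port (42987 here) is ephemeral and differs per run. A sketch of the same invocation, assuming a mirror you are serving yourself:

	out/minikube-linux-amd64 start --download-only -p binary-mirror-140956 --alsologtostderr \
	  --binary-mirror http://127.0.0.1:42987 --driver=kvm2
	out/minikube-linux-amd64 delete -p binary-mirror-140956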

TestOffline (70.51s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-132826 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-132826 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m9.366379514s)
helpers_test.go:175: Cleaning up "offline-docker-132826" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-132826
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-132826: (1.146729111s)
--- PASS: TestOffline (70.51s)
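
TestOffline exercises a start that is expected to come up from the local image and preload cache built by the earlier download-only runs; --wait=true additionally blocks until the default component set (apiserver and system pods) reports healthy. Replayed by hand:

	out/minikube-linux-amd64 start -p offline-docker-132826 --alsologtostderr -v=1 \
	  --memory=2048 --wait=true --driver=kvm2
	out/minikube-linux-amd64 delete -p offline-docker-132826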

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-188169
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-188169: exit status 85 (61.404284ms)

-- stdout --
	* Profile "addons-188169" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-188169"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-188169
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-188169: exit status 85 (62.127062ms)

-- stdout --
	* Profile "addons-188169" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-188169"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (159.47s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-188169 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-188169 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m39.472308412s)
--- PASS: TestAddons/Setup (159.47s)

TestAddons/parallel/Registry (18.28s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 25.467629ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-mhfcg" [16c007e5-e03f-45bb-ace9-d36e6d1945ac] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007373492s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-kd6pg" [23833488-a4c7-412f-acd7-b0928aaa0522] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.045323134s
addons_test.go:340: (dbg) Run:  kubectl --context addons-188169 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-188169 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-188169 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.328075122s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-188169 ip
2024/01/08 20:53:30 [DEBUG] GET http://192.168.39.64:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-188169 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.28s)
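
The registry check runs end to end inside the cluster: a throwaway busybox pod resolves the addon's service DNS name and issues wget --spider against it (headers only, no body), then the host side fetches the node IP and probes the registry-proxy port. The two probes, as executed here:

	kubectl --context addons-188169 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	out/minikube-linux-amd64 -p addons-188169 ip   # node IP; the test then GETs port 5000 on it (the DEBUG line above)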

TestAddons/parallel/Ingress (26.12s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-188169 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-188169 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-188169 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [64900adb-de8e-4d5c-aac0-895635fec80c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [64900adb-de8e-4d5c-aac0-895635fec80c] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.003884003s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-188169 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-188169 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-188169 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.64
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-188169 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-188169 addons disable ingress-dns --alsologtostderr -v=1: (2.742127782s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-188169 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-188169 addons disable ingress --alsologtostderr -v=1: (7.938643603s)
--- PASS: TestAddons/parallel/Ingress (26.12s)
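
Both ingress paths are validated against this run's VM IP (192.168.39.64; substitute the output of "minikube -p <profile> ip"): the curl goes through the nginx controller with a spoofed Host header so the Ingress rule matches, and nslookup resolves a test hostname through the ingress-dns addon:

	out/minikube-linux-amd64 -p addons-188169 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	nslookup hello-john.test 192.168.39.64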

TestAddons/parallel/InspektorGadget (11.8s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-kjjsv" [4e939052-29d0-4c6e-9f41-1fbe77bd1579] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005256193s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-188169
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-188169: (5.788712662s)
--- PASS: TestAddons/parallel/InspektorGadget (11.80s)

TestAddons/parallel/MetricsServer (6.15s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.218415ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-7tcxq" [88455bf1-0514-48ff-ad90-b803e8999d39] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006644114s
addons_test.go:415: (dbg) Run:  kubectl --context addons-188169 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-188169 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-linux-amd64 -p addons-188169 addons disable metrics-server --alsologtostderr -v=1: (1.07394156s)
--- PASS: TestAddons/parallel/MetricsServer (6.15s)
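
Once the metrics-server pod is healthy, the functional assertion is simply that the metrics API answers; the same check by hand is one command, though it can take a minute after startup before usage numbers are available:

	kubectl --context addons-188169 top pods -n kube-system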

TestAddons/parallel/HelmTiller (11.2s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 23.273798ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-6wqh8" [c9192bba-7caf-4480-83f9-98f79bdf615e] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.006042807s
addons_test.go:473: (dbg) Run:  kubectl --context addons-188169 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-188169 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.549503886s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-188169 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.20s)
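
Tiller is exercised with a one-shot helm v2 client pod in kube-system; "version" forces a round trip to the tiller-deploy server, so a clean exit implies the server side answered. As run here:

	kubectl --context addons-188169 run --rm helm-test --restart=Never \
	  --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version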

TestAddons/parallel/CSI (71.71s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 5.756722ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-188169 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-188169 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [7842c038-513c-467d-a3b7-d74c2eb74454] Pending
helpers_test.go:344: "task-pv-pod" [7842c038-513c-467d-a3b7-d74c2eb74454] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [7842c038-513c-467d-a3b7-d74c2eb74454] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 19.005466206s
addons_test.go:584: (dbg) Run:  kubectl --context addons-188169 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-188169 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-188169 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-188169 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-188169 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-188169 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-188169 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [715b5585-c9df-4456-8b15-7e8266b93b20] Pending
helpers_test.go:344: "task-pv-pod-restore" [715b5585-c9df-4456-8b15-7e8266b93b20] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [715b5585-c9df-4456-8b15-7e8266b93b20] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004420547s
addons_test.go:626: (dbg) Run:  kubectl --context addons-188169 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-188169 delete pod task-pv-pod-restore: (1.227706237s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-188169 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-188169 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-188169 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-188169 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.674193718s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-188169 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (71.71s)
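
The long runs of helpers_test.go:394 lines above are the harness polling each PVC's .status.phase until it reports Bound (the snapshot wait at helpers_test.go:419 polls .status.readyToUse the same way). An equivalent shell loop, as a sketch with a fixed interval in place of the harness's own polling:

	until [ "$(kubectl --context addons-188169 get pvc hpvc -n default \
	    -o jsonpath='{.status.phase}')" = "Bound" ]; do
	  sleep 2   # the harness uses its own poll interval and a 6m deadline
	done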

TestAddons/parallel/Headlamp (17.17s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-188169 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-188169 --alsologtostderr -v=1: (2.161951309s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-m54sk" [f8ab7ca9-59ab-4106-8a03-51cd14191eb9] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-m54sk" [f8ab7ca9-59ab-4106-8a03-51cd14191eb9] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-m54sk" [f8ab7ca9-59ab-4106-8a03-51cd14191eb9] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.00452282s
--- PASS: TestAddons/parallel/Headlamp (17.17s)

TestAddons/parallel/CloudSpanner (5.88s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-kcxs5" [1b54b67f-fef9-4c99-96c4-2ac8bbde6ff5] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005397666s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-188169
--- PASS: TestAddons/parallel/CloudSpanner (5.88s)

TestAddons/parallel/LocalPath (64.28s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-188169 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-188169 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188169 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [8d6cb81d-7c12-4ccd-95d7-9456f330ae7e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [8d6cb81d-7c12-4ccd-95d7-9456f330ae7e] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [8d6cb81d-7c12-4ccd-95d7-9456f330ae7e] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 12.004329163s
addons_test.go:891: (dbg) Run:  kubectl --context addons-188169 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-188169 ssh "cat /opt/local-path-provisioner/pvc-38f23c73-b76b-42d2-82ec-a414ec71bf26_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-188169 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-188169 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-188169 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-188169 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.396141049s)
--- PASS: TestAddons/parallel/LocalPath (64.28s)
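
The path read back over ssh is where the local-path provisioner materializes the volume inside the VM: a directory named pvc-<uid>_<namespace>_<pvc-name> under /opt/local-path-provisioner, so it is unique to each run. The verification pair, with this run's UID:

	kubectl --context addons-188169 get pvc test-pvc -o=json   # .spec.volumeName carries the pvc-<uid> name
	out/minikube-linux-amd64 -p addons-188169 ssh \
	  "cat /opt/local-path-provisioner/pvc-38f23c73-b76b-42d2-82ec-a414ec71bf26_default_test-pvc/file1"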

TestAddons/parallel/NvidiaDevicePlugin (6.49s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-s64x7" [8d0ed233-f7dc-4ce6-9152-389ab34fe49e] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005500792s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-188169
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.49s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-z95wc" [47fe4205-993f-420f-9570-870860bfed61] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004773424s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-188169 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-188169 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/StoppedEnableDisable (13.42s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-188169
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-188169: (13.11042141s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-188169
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-188169
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-188169
--- PASS: TestAddons/StoppedEnableDisable (13.42s)

TestCertOptions (68.36s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-504532 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
E0108 21:32:25.661449  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/ingress-addon-legacy-183510/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-504532 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m5.701027556s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-504532 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-504532 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-504532 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-504532" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-504532
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-504532: (1.581226789s)
--- PASS: TestCertOptions (68.36s)

TestCertExpiration (343.3s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-083258 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-083258 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m42.588495285s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-083258 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-083258 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (59.496408813s)
helpers_test.go:175: Cleaning up "cert-expiration-083258" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-083258
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-083258: (1.211270139s)
--- PASS: TestCertExpiration (343.30s)

TestDockerFlags (90s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-381243 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-381243 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m28.427809205s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-381243 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-381243 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-381243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-381243
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-381243: (1.111663558s)
--- PASS: TestDockerFlags (90.00s)

TestForceSystemdFlag (60.65s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-439568 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
E0108 21:28:54.406657  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/functional-733963/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-439568 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (59.284604798s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-439568 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-439568" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-439568
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-439568: (1.082962603s)
--- PASS: TestForceSystemdFlag (60.65s)

TestForceSystemdEnv (85.22s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-842883 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-842883 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m23.835681511s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-842883 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-842883" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-842883
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-842883: (1.105098332s)
--- PASS: TestForceSystemdEnv (85.22s)

TestKVMDriverInstallOrUpdate (6.24s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (6.24s)

TestErrorSpam/setup (51.73s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-190449 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-190449 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-190449 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-190449 --driver=kvm2 : (51.731925887s)
--- PASS: TestErrorSpam/setup (51.73s)

TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-190449 --log_dir /tmp/nospam-190449 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-190449 --log_dir /tmp/nospam-190449 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-190449 --log_dir /tmp/nospam-190449 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.84s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-190449 --log_dir /tmp/nospam-190449 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-190449 --log_dir /tmp/nospam-190449 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-190449 --log_dir /tmp/nospam-190449 status
--- PASS: TestErrorSpam/status (0.84s)

TestErrorSpam/pause (1.23s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-190449 --log_dir /tmp/nospam-190449 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-190449 --log_dir /tmp/nospam-190449 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-190449 --log_dir /tmp/nospam-190449 pause
--- PASS: TestErrorSpam/pause (1.23s)

TestErrorSpam/unpause (1.4s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-190449 --log_dir /tmp/nospam-190449 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-190449 --log_dir /tmp/nospam-190449 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-190449 --log_dir /tmp/nospam-190449 unpause
--- PASS: TestErrorSpam/unpause (1.40s)

TestErrorSpam/stop (12.54s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-190449 --log_dir /tmp/nospam-190449 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-190449 --log_dir /tmp/nospam-190449 stop: (12.357233928s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-190449 --log_dir /tmp/nospam-190449 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-190449 --log_dir /tmp/nospam-190449 stop
--- PASS: TestErrorSpam/stop (12.54s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17866-142784/.minikube/files/etc/test/nested/copy/149988/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (62.49s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-733963 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-733963 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m2.49069005s)
--- PASS: TestFunctional/serial/StartWithProxy (62.49s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.31s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-733963 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-733963 --alsologtostderr -v=8: (40.304379931s)
functional_test.go:659: soft start took 40.305260965s for "functional-733963" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.31s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-733963 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.38s)

TestFunctional/serial/CacheCmd/cache/add_local (1.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-733963 /tmp/TestFunctionalserialCacheCmdcacheadd_local3930679964/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 cache add minikube-local-cache-test:functional-733963
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 cache delete minikube-local-cache-test:functional-733963
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-733963
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.30s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-733963 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (242.340529ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.28s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 kubectl -- --context functional-733963 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-733963 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (40.82s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-733963 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0108 20:58:13.178709  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/addons-188169/client.crt: no such file or directory
E0108 20:58:13.184487  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/addons-188169/client.crt: no such file or directory
E0108 20:58:13.194815  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/addons-188169/client.crt: no such file or directory
E0108 20:58:13.215087  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/addons-188169/client.crt: no such file or directory
E0108 20:58:13.255392  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/addons-188169/client.crt: no such file or directory
E0108 20:58:13.335778  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/addons-188169/client.crt: no such file or directory
E0108 20:58:13.496239  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/addons-188169/client.crt: no such file or directory
E0108 20:58:13.816870  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/addons-188169/client.crt: no such file or directory
E0108 20:58:14.457852  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/addons-188169/client.crt: no such file or directory
E0108 20:58:15.738109  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/addons-188169/client.crt: no such file or directory
E0108 20:58:18.299986  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/addons-188169/client.crt: no such file or directory
E0108 20:58:23.420172  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/addons-188169/client.crt: no such file or directory
E0108 20:58:33.661190  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/addons-188169/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-733963 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.816536656s)
functional_test.go:757: restart took 40.816651279s for "functional-733963" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.82s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-733963 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.1s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-733963 logs: (1.102100852s)
--- PASS: TestFunctional/serial/LogsCmd (1.10s)

TestFunctional/serial/LogsFileCmd (1.12s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 logs --file /tmp/TestFunctionalserialLogsFileCmd2737847322/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-733963 logs --file /tmp/TestFunctionalserialLogsFileCmd2737847322/001/logs.txt: (1.119422264s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.12s)

TestFunctional/serial/InvalidService (4.46s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-733963 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-733963
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-733963: exit status 115 (313.825556ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.64:32176 |
	|-----------|-------------|-------------|----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-733963 delete -f testdata/invalidsvc.yaml
E0108 20:58:54.141359  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/addons-188169/client.crt: no such file or directory
--- PASS: TestFunctional/serial/InvalidService (4.46s)

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-733963 config get cpus: exit status 14 (85.807351ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-733963 config get cpus: exit status 14 (75.383631ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

TestFunctional/parallel/DashboardCmd (23.51s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-733963 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-733963 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 156762: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (23.51s)

TestFunctional/parallel/DryRun (0.32s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-733963 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-733963 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (155.096867ms)

-- stdout --
	* [functional-733963] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-142784/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-142784/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0108 20:58:55.867875  155492 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:58:55.867980  155492 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:58:55.867986  155492 out.go:309] Setting ErrFile to fd 2...
	I0108 20:58:55.867990  155492 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:58:55.868232  155492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-142784/.minikube/bin
	I0108 20:58:55.869463  155492 out.go:303] Setting JSON to false
	I0108 20:58:55.870561  155492 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6067,"bootTime":1704741469,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:58:55.870651  155492 start.go:138] virtualization: kvm guest
	I0108 20:58:55.872768  155492 out.go:177] * [functional-733963] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 20:58:55.874524  155492 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 20:58:55.874573  155492 notify.go:220] Checking for updates...
	I0108 20:58:55.877650  155492 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:58:55.879000  155492 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-142784/kubeconfig
	I0108 20:58:55.880748  155492 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-142784/.minikube
	I0108 20:58:55.882079  155492 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 20:58:55.883826  155492 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:58:55.886116  155492 config.go:182] Loaded profile config "functional-733963": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 20:58:55.886665  155492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0108 20:58:55.886718  155492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:58:55.903095  155492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34485
	I0108 20:58:55.903540  155492 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:58:55.904165  155492 main.go:141] libmachine: Using API Version  1
	I0108 20:58:55.904224  155492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:58:55.904657  155492 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:58:55.904891  155492 main.go:141] libmachine: (functional-733963) Calling .DriverName
	I0108 20:58:55.905153  155492 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:58:55.905594  155492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0108 20:58:55.905642  155492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:58:55.921566  155492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33873
	I0108 20:58:55.922005  155492 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:58:55.922507  155492 main.go:141] libmachine: Using API Version  1
	I0108 20:58:55.922534  155492 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:58:55.922948  155492 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:58:55.923135  155492 main.go:141] libmachine: (functional-733963) Calling .DriverName
	I0108 20:58:55.960072  155492 out.go:177] * Using the kvm2 driver based on existing profile
	I0108 20:58:55.961408  155492 start.go:298] selected driver: kvm2
	I0108 20:58:55.961429  155492 start.go:902] validating driver "kvm2" against &{Name:functional-733963 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-733963 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.64 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 20:58:55.961585  155492 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:58:55.964047  155492 out.go:177] 
	W0108 20:58:55.965500  155492 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0108 20:58:55.966785  155492 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-733963 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.32s)

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-733963 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-733963 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (161.643659ms)

-- stdout --
	* [functional-733963] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-142784/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-142784/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I0108 20:58:56.196181  155546 out.go:296] Setting OutFile to fd 1 ...
	I0108 20:58:56.196295  155546 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:58:56.196299  155546 out.go:309] Setting ErrFile to fd 2...
	I0108 20:58:56.196304  155546 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 20:58:56.196572  155546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-142784/.minikube/bin
	I0108 20:58:56.197079  155546 out.go:303] Setting JSON to false
	I0108 20:58:56.198088  155546 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6067,"bootTime":1704741469,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 20:58:56.198151  155546 start.go:138] virtualization: kvm guest
	I0108 20:58:56.200081  155546 out.go:177] * [functional-733963] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0108 20:58:56.201703  155546 out.go:177]   - MINIKUBE_LOCATION=17866
	I0108 20:58:56.201757  155546 notify.go:220] Checking for updates...
	I0108 20:58:56.203165  155546 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 20:58:56.204691  155546 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17866-142784/kubeconfig
	I0108 20:58:56.205945  155546 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-142784/.minikube
	I0108 20:58:56.207139  155546 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 20:58:56.208322  155546 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 20:58:56.209922  155546 config.go:182] Loaded profile config "functional-733963": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 20:58:56.210289  155546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0108 20:58:56.210340  155546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:58:56.226203  155546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33365
	I0108 20:58:56.226614  155546 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:58:56.227287  155546 main.go:141] libmachine: Using API Version  1
	I0108 20:58:56.227323  155546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:58:56.227714  155546 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:58:56.227926  155546 main.go:141] libmachine: (functional-733963) Calling .DriverName
	I0108 20:58:56.228172  155546 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 20:58:56.228698  155546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0108 20:58:56.228750  155546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 20:58:56.244539  155546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33613
	I0108 20:58:56.244957  155546 main.go:141] libmachine: () Calling .GetVersion
	I0108 20:58:56.245517  155546 main.go:141] libmachine: Using API Version  1
	I0108 20:58:56.245540  155546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 20:58:56.245934  155546 main.go:141] libmachine: () Calling .GetMachineName
	I0108 20:58:56.246149  155546 main.go:141] libmachine: (functional-733963) Calling .DriverName
	I0108 20:58:56.282050  155546 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0108 20:58:56.283428  155546 start.go:298] selected driver: kvm2
	I0108 20:58:56.283442  155546 start.go:902] validating driver "kvm2" against &{Name:functional-733963 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1703790982-17866@sha256:b576e790ed1b4dd02d797e8af9f950da6523ba7d8a18c43546b141ba86545d9d Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-733963 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.64 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0108 20:58:56.283537  155546 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 20:58:56.285961  155546 out.go:177] 
	W0108 20:58:56.287755  155546 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0108 20:58:56.289210  155546 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (1.08s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.08s)

TestFunctional/parallel/ServiceCmdConnect (11.54s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-733963 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-733963 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-2q47n" [acbad83f-5a30-4fdc-a015-fb7f759edf19] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-2q47n" [acbad83f-5a30-4fdc-a015-fb7f759edf19] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.005170565s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.64:32088
functional_test.go:1674: http://192.168.39.64:32088: success! body:

Hostname: hello-node-connect-55497b8b78-2q47n

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.64:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.64:32088
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.54s)

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (58.48s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [933f1904-70a1-4d68-839f-08cdb1b1e6dd] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.006340231s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-733963 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-733963 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-733963 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-733963 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-733963 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2d2188c4-ebc5-48b9-a6ba-0580bb15476d] Pending
helpers_test.go:344: "sp-pod" [2d2188c4-ebc5-48b9-a6ba-0580bb15476d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2d2188c4-ebc5-48b9-a6ba-0580bb15476d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.160344594s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-733963 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-733963 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-733963 delete -f testdata/storage-provisioner/pod.yaml: (1.610153705s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-733963 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d0a1f717-744b-4eb3-9af8-163262388f8d] Pending
helpers_test.go:344: "sp-pod" [d0a1f717-744b-4eb3-9af8-163262388f8d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d0a1f717-744b-4eb3-9af8-163262388f8d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.008646792s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-733963 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (58.48s)
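
The sequence above is the heart of this test: a file created on the PVC-backed mount survives deletion and re-creation of the pod, which would not hold for the container filesystem. For reference, the same check can be replayed by hand with the commands the test drives:

    kubectl --context functional-733963 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-733963 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-733963 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-733963 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-733963 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-733963 exec sp-pod -- ls /tmp/mount    # "foo" should still be listed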

TestFunctional/parallel/SSHCmd (0.53s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.53s)

TestFunctional/parallel/CpCmd (1.66s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh -n functional-733963 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 cp functional-733963:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd348538877/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh -n functional-733963 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh -n functional-733963 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.66s)

TestFunctional/parallel/MySQL (41.53s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-733963 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-w9jb4" [259cdc9c-43cb-4cdb-8da2-c1fa3619de5c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-w9jb4" [259cdc9c-43cb-4cdb-8da2-c1fa3619de5c] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 34.004499714s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-733963 exec mysql-859648c796-w9jb4 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-733963 exec mysql-859648c796-w9jb4 -- mysql -ppassword -e "show databases;": exit status 1 (184.815541ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-733963 exec mysql-859648c796-w9jb4 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-733963 exec mysql-859648c796-w9jb4 -- mysql -ppassword -e "show databases;": exit status 1 (175.019487ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-733963 exec mysql-859648c796-w9jb4 -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-733963 exec mysql-859648c796-w9jb4 -- mysql -ppassword -e "show databases;": exit status 1 (191.041798ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-733963 exec mysql-859648c796-w9jb4 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (41.53s)
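
The three non-zero exits above are expected noise rather than failures: ERROR 1045 and ERROR 2002 typically appear while mysqld is still initialising inside the pod (the bootstrap server does not yet accept the final credentials, and the socket briefly disappears across its restart). The test simply retries until the query succeeds; a minimal sketch of that polling, outside the suite:

    # Sketch only; pod name taken from the log above.
    until kubectl --context functional-733963 exec mysql-859648c796-w9jb4 -- \
        mysql -ppassword -e "show databases;"; do
        sleep 2    # ERROR 1045/2002 are expected while mysqld bootstraps
    done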

TestFunctional/parallel/FileSync (0.25s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/149988/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh "sudo cat /etc/test/nested/copy/149988/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)
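
This test relies on minikube's file sync mechanism: files placed under the host's $MINIKUBE_HOME/files directory (by default ~/.minikube/files) are copied into the node at the same path. A sketch of staging such a file by hand, assuming that default location (the sync happens when the node is started):

    mkdir -p ~/.minikube/files/etc/test/nested/copy/149988
    echo "Test file for checking file sync process" \
        > ~/.minikube/files/etc/test/nested/copy/149988/hosts
    # after the next start of the profile, the file appears inside the VM:
    out/minikube-linux-amd64 -p functional-733963 ssh "sudo cat /etc/test/nested/copy/149988/hosts"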

TestFunctional/parallel/CertSync (1.51s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/149988.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh "sudo cat /etc/ssl/certs/149988.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/149988.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh "sudo cat /usr/share/ca-certificates/149988.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/1499882.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh "sudo cat /etc/ssl/certs/1499882.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/1499882.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh "sudo cat /usr/share/ca-certificates/1499882.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.51s)
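
CertSync verifies each synced certificate under two names: its original file name (149988.pem, 1499882.pem) and an OpenSSL subject-hash name (51391683.0, 3ec20f2e.0), which is the form the system trust store uses for lookups. Assuming that naming scheme, the hash can be recomputed from the PEM itself:

    # expected to print 51391683, matching the .0 file checked above
    openssl x509 -noout -hash -in /etc/ssl/certs/149988.pem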

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-733963 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
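
The go-template above iterates over the label keys of the first node; the same information can be read more directly with kubectl's built-in flag:

    kubectl --context functional-733963 get nodes --show-labels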

TestFunctional/parallel/NonActiveRuntimeDisabled (0.22s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-733963 ssh "sudo systemctl is-active crio": exit status 1 (220.139432ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.22s)
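
The non-zero exit here is the expected outcome: systemctl is-active exits 0 only for an active unit and uses status 3 for an inactive one, and that code propagates through the SSH session ("ssh: Process exited with status 3"). Since this profile runs the docker runtime, crio must report inactive; the complementary check would be:

    out/minikube-linux-amd64 -p functional-733963 ssh "sudo systemctl is-active docker"    # expected: active, exit 0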

TestFunctional/parallel/License (0.16s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-733963 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-733963 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-snmqn" [2fbd1336-3542-4931-8577-69c086b24ec7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-snmqn" [2fbd1336-3542-4931-8577-69c086b24ec7] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.006076352s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.25s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "227.866845ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "65.79536ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "221.276119ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "65.219171ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (0.78s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.78s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-733963 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-733963
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-733963
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-733963 image ls --format short --alsologtostderr:
I0108 20:59:34.176427  157446 out.go:296] Setting OutFile to fd 1 ...
I0108 20:59:34.176698  157446 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:59:34.176708  157446 out.go:309] Setting ErrFile to fd 2...
I0108 20:59:34.176713  157446 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:59:34.176896  157446 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-142784/.minikube/bin
I0108 20:59:34.177509  157446 config.go:182] Loaded profile config "functional-733963": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 20:59:34.177611  157446 config.go:182] Loaded profile config "functional-733963": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 20:59:34.178018  157446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0108 20:59:34.178060  157446 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 20:59:34.193084  157446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33207
I0108 20:59:34.193590  157446 main.go:141] libmachine: () Calling .GetVersion
I0108 20:59:34.194229  157446 main.go:141] libmachine: Using API Version  1
I0108 20:59:34.194257  157446 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 20:59:34.194606  157446 main.go:141] libmachine: () Calling .GetMachineName
I0108 20:59:34.194794  157446 main.go:141] libmachine: (functional-733963) Calling .GetState
I0108 20:59:34.196733  157446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0108 20:59:34.196783  157446 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 20:59:34.211570  157446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37157
I0108 20:59:34.212019  157446 main.go:141] libmachine: () Calling .GetVersion
I0108 20:59:34.212495  157446 main.go:141] libmachine: Using API Version  1
I0108 20:59:34.212519  157446 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 20:59:34.212885  157446 main.go:141] libmachine: () Calling .GetMachineName
I0108 20:59:34.213083  157446 main.go:141] libmachine: (functional-733963) Calling .DriverName
I0108 20:59:34.213258  157446 ssh_runner.go:195] Run: systemctl --version
I0108 20:59:34.213286  157446 main.go:141] libmachine: (functional-733963) Calling .GetSSHHostname
I0108 20:59:34.216020  157446 main.go:141] libmachine: (functional-733963) DBG | domain functional-733963 has defined MAC address 52:54:00:4b:30:bc in network mk-functional-733963
I0108 20:59:34.216475  157446 main.go:141] libmachine: (functional-733963) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:30:bc", ip: ""} in network mk-functional-733963: {Iface:virbr1 ExpiryTime:2024-01-08 21:56:33 +0000 UTC Type:0 Mac:52:54:00:4b:30:bc Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:functional-733963 Clientid:01:52:54:00:4b:30:bc}
I0108 20:59:34.216505  157446 main.go:141] libmachine: (functional-733963) DBG | domain functional-733963 has defined IP address 192.168.39.64 and MAC address 52:54:00:4b:30:bc in network mk-functional-733963
I0108 20:59:34.216642  157446 main.go:141] libmachine: (functional-733963) Calling .GetSSHPort
I0108 20:59:34.216853  157446 main.go:141] libmachine: (functional-733963) Calling .GetSSHKeyPath
I0108 20:59:34.217007  157446 main.go:141] libmachine: (functional-733963) Calling .GetSSHUsername
I0108 20:59:34.217136  157446 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/functional-733963/id_rsa Username:docker}
I0108 20:59:34.383214  157446 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0108 20:59:34.417326  157446 main.go:141] libmachine: Making call to close driver server
I0108 20:59:34.417342  157446 main.go:141] libmachine: (functional-733963) Calling .Close
I0108 20:59:34.417626  157446 main.go:141] libmachine: Successfully made call to close driver server
I0108 20:59:34.417644  157446 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 20:59:34.417653  157446 main.go:141] libmachine: Making call to close driver server
I0108 20:59:34.417662  157446 main.go:141] libmachine: (functional-733963) Calling .Close
I0108 20:59:34.418017  157446 main.go:141] libmachine: Successfully made call to close driver server
I0108 20:59:34.418041  157446 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 20:59:34.418030  157446 main.go:141] libmachine: (functional-733963) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-733963 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-733963 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-733963 | eef002204f57a | 30B    |
| docker.io/library/nginx                     | latest            | d453dd892d935 | 187MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-733963 image ls --format table --alsologtostderr:
I0108 20:59:35.666863  157672 out.go:296] Setting OutFile to fd 1 ...
I0108 20:59:35.667143  157672 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:59:35.667154  157672 out.go:309] Setting ErrFile to fd 2...
I0108 20:59:35.667159  157672 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:59:35.667352  157672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-142784/.minikube/bin
I0108 20:59:35.667914  157672 config.go:182] Loaded profile config "functional-733963": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 20:59:35.668009  157672 config.go:182] Loaded profile config "functional-733963": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 20:59:35.668418  157672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0108 20:59:35.668460  157672 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 20:59:35.683192  157672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38641
I0108 20:59:35.683665  157672 main.go:141] libmachine: () Calling .GetVersion
I0108 20:59:35.684224  157672 main.go:141] libmachine: Using API Version  1
I0108 20:59:35.684247  157672 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 20:59:35.684596  157672 main.go:141] libmachine: () Calling .GetMachineName
I0108 20:59:35.684795  157672 main.go:141] libmachine: (functional-733963) Calling .GetState
I0108 20:59:35.686736  157672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0108 20:59:35.686779  157672 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 20:59:35.701949  157672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36493
I0108 20:59:35.702399  157672 main.go:141] libmachine: () Calling .GetVersion
I0108 20:59:35.702960  157672 main.go:141] libmachine: Using API Version  1
I0108 20:59:35.702991  157672 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 20:59:35.703369  157672 main.go:141] libmachine: () Calling .GetMachineName
I0108 20:59:35.703586  157672 main.go:141] libmachine: (functional-733963) Calling .DriverName
I0108 20:59:35.703794  157672 ssh_runner.go:195] Run: systemctl --version
I0108 20:59:35.703853  157672 main.go:141] libmachine: (functional-733963) Calling .GetSSHHostname
I0108 20:59:35.707299  157672 main.go:141] libmachine: (functional-733963) DBG | domain functional-733963 has defined MAC address 52:54:00:4b:30:bc in network mk-functional-733963
I0108 20:59:35.707915  157672 main.go:141] libmachine: (functional-733963) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:30:bc", ip: ""} in network mk-functional-733963: {Iface:virbr1 ExpiryTime:2024-01-08 21:56:33 +0000 UTC Type:0 Mac:52:54:00:4b:30:bc Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:functional-733963 Clientid:01:52:54:00:4b:30:bc}
I0108 20:59:35.707947  157672 main.go:141] libmachine: (functional-733963) DBG | domain functional-733963 has defined IP address 192.168.39.64 and MAC address 52:54:00:4b:30:bc in network mk-functional-733963
I0108 20:59:35.708168  157672 main.go:141] libmachine: (functional-733963) Calling .GetSSHPort
I0108 20:59:35.708368  157672 main.go:141] libmachine: (functional-733963) Calling .GetSSHKeyPath
I0108 20:59:35.708562  157672 main.go:141] libmachine: (functional-733963) Calling .GetSSHUsername
I0108 20:59:35.708728  157672 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/functional-733963/id_rsa Username:docker}
I0108 20:59:35.795021  157672 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0108 20:59:35.832643  157672 main.go:141] libmachine: Making call to close driver server
I0108 20:59:35.832668  157672 main.go:141] libmachine: (functional-733963) Calling .Close
I0108 20:59:35.832964  157672 main.go:141] libmachine: Successfully made call to close driver server
I0108 20:59:35.832983  157672 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 20:59:35.832992  157672 main.go:141] libmachine: (functional-733963) DBG | Closing plugin on server side
I0108 20:59:35.832999  157672 main.go:141] libmachine: Making call to close driver server
I0108 20:59:35.833010  157672 main.go:141] libmachine: (functional-733963) Calling .Close
I0108 20:59:35.833274  157672 main.go:141] libmachine: Successfully made call to close driver server
I0108 20:59:35.833294  157672 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-733963 image ls --format json --alsologtostderr:
[
  {"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},
  {"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},
  {"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},
  {"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},
  {"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},
  {"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},
  {"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},
  {"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},
  {"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},
  {"id":"eef002204f57a5991e0a3ffb5095948e6089be834007e7aca6f914f65815145a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-733963"],"size":"30"},
  {"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},
  {"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},
  {"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},
  {"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},
  {"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},
  {"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-733963"],"size":"32900000"},
  {"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},
  {"id":"d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"}
]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-733963 image ls --format json --alsologtostderr:
I0108 20:59:35.421694  157649 out.go:296] Setting OutFile to fd 1 ...
I0108 20:59:35.421861  157649 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:59:35.421873  157649 out.go:309] Setting ErrFile to fd 2...
I0108 20:59:35.421879  157649 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:59:35.422138  157649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-142784/.minikube/bin
I0108 20:59:35.422747  157649 config.go:182] Loaded profile config "functional-733963": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 20:59:35.422890  157649 config.go:182] Loaded profile config "functional-733963": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 20:59:35.423292  157649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0108 20:59:35.423334  157649 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 20:59:35.438130  157649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33863
I0108 20:59:35.438567  157649 main.go:141] libmachine: () Calling .GetVersion
I0108 20:59:35.439146  157649 main.go:141] libmachine: Using API Version  1
I0108 20:59:35.439171  157649 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 20:59:35.439537  157649 main.go:141] libmachine: () Calling .GetMachineName
I0108 20:59:35.439743  157649 main.go:141] libmachine: (functional-733963) Calling .GetState
I0108 20:59:35.441688  157649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0108 20:59:35.441733  157649 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 20:59:35.456303  157649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45251
I0108 20:59:35.456711  157649 main.go:141] libmachine: () Calling .GetVersion
I0108 20:59:35.457215  157649 main.go:141] libmachine: Using API Version  1
I0108 20:59:35.457240  157649 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 20:59:35.457601  157649 main.go:141] libmachine: () Calling .GetMachineName
I0108 20:59:35.457775  157649 main.go:141] libmachine: (functional-733963) Calling .DriverName
I0108 20:59:35.458007  157649 ssh_runner.go:195] Run: systemctl --version
I0108 20:59:35.458038  157649 main.go:141] libmachine: (functional-733963) Calling .GetSSHHostname
I0108 20:59:35.460913  157649 main.go:141] libmachine: (functional-733963) DBG | domain functional-733963 has defined MAC address 52:54:00:4b:30:bc in network mk-functional-733963
I0108 20:59:35.461306  157649 main.go:141] libmachine: (functional-733963) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:30:bc", ip: ""} in network mk-functional-733963: {Iface:virbr1 ExpiryTime:2024-01-08 21:56:33 +0000 UTC Type:0 Mac:52:54:00:4b:30:bc Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:functional-733963 Clientid:01:52:54:00:4b:30:bc}
I0108 20:59:35.461346  157649 main.go:141] libmachine: (functional-733963) DBG | domain functional-733963 has defined IP address 192.168.39.64 and MAC address 52:54:00:4b:30:bc in network mk-functional-733963
I0108 20:59:35.461515  157649 main.go:141] libmachine: (functional-733963) Calling .GetSSHPort
I0108 20:59:35.461695  157649 main.go:141] libmachine: (functional-733963) Calling .GetSSHKeyPath
I0108 20:59:35.461863  157649 main.go:141] libmachine: (functional-733963) Calling .GetSSHUsername
I0108 20:59:35.461998  157649 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/functional-733963/id_rsa Username:docker}
I0108 20:59:35.559569  157649 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0108 20:59:35.603622  157649 main.go:141] libmachine: Making call to close driver server
I0108 20:59:35.603636  157649 main.go:141] libmachine: (functional-733963) Calling .Close
I0108 20:59:35.603969  157649 main.go:141] libmachine: Successfully made call to close driver server
I0108 20:59:35.603997  157649 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 20:59:35.604006  157649 main.go:141] libmachine: Making call to close driver server
I0108 20:59:35.604014  157649 main.go:141] libmachine: (functional-733963) Calling .Close
I0108 20:59:35.604283  157649 main.go:141] libmachine: (functional-733963) DBG | Closing plugin on server side
I0108 20:59:35.604298  157649 main.go:141] libmachine: Successfully made call to close driver server
I0108 20:59:35.604310  157649 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
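
The JSON format is the machine-readable variant of image ls; for example, assuming jq is available on the host, the tag list alone can be extracted with:

    out/minikube-linux-amd64 -p functional-733963 image ls --format json | jq -r '.[].repoTags[]'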

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-733963 image ls --format yaml --alsologtostderr:
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-733963
size: "32900000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: eef002204f57a5991e0a3ffb5095948e6089be834007e7aca6f914f65815145a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-733963
size: "30"
- id: d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-733963 image ls --format yaml --alsologtostderr:
I0108 20:59:34.482677  157471 out.go:296] Setting OutFile to fd 1 ...
I0108 20:59:34.482838  157471 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:59:34.482851  157471 out.go:309] Setting ErrFile to fd 2...
I0108 20:59:34.482860  157471 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:59:34.483163  157471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-142784/.minikube/bin
I0108 20:59:34.483929  157471 config.go:182] Loaded profile config "functional-733963": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 20:59:34.484100  157471 config.go:182] Loaded profile config "functional-733963": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 20:59:34.484624  157471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0108 20:59:34.484690  157471 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 20:59:34.498958  157471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45581
I0108 20:59:34.499407  157471 main.go:141] libmachine: () Calling .GetVersion
I0108 20:59:34.499961  157471 main.go:141] libmachine: Using API Version  1
I0108 20:59:34.499988  157471 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 20:59:34.500343  157471 main.go:141] libmachine: () Calling .GetMachineName
I0108 20:59:34.500544  157471 main.go:141] libmachine: (functional-733963) Calling .GetState
I0108 20:59:34.502544  157471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0108 20:59:34.502581  157471 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 20:59:34.517627  157471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37479
I0108 20:59:34.518299  157471 main.go:141] libmachine: () Calling .GetVersion
I0108 20:59:34.518848  157471 main.go:141] libmachine: Using API Version  1
I0108 20:59:34.518876  157471 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 20:59:34.519183  157471 main.go:141] libmachine: () Calling .GetMachineName
I0108 20:59:34.519350  157471 main.go:141] libmachine: (functional-733963) Calling .DriverName
I0108 20:59:34.519566  157471 ssh_runner.go:195] Run: systemctl --version
I0108 20:59:34.519607  157471 main.go:141] libmachine: (functional-733963) Calling .GetSSHHostname
I0108 20:59:34.523077  157471 main.go:141] libmachine: (functional-733963) DBG | domain functional-733963 has defined MAC address 52:54:00:4b:30:bc in network mk-functional-733963
I0108 20:59:34.523800  157471 main.go:141] libmachine: (functional-733963) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:30:bc", ip: ""} in network mk-functional-733963: {Iface:virbr1 ExpiryTime:2024-01-08 21:56:33 +0000 UTC Type:0 Mac:52:54:00:4b:30:bc Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:functional-733963 Clientid:01:52:54:00:4b:30:bc}
I0108 20:59:34.523841  157471 main.go:141] libmachine: (functional-733963) DBG | domain functional-733963 has defined IP address 192.168.39.64 and MAC address 52:54:00:4b:30:bc in network mk-functional-733963
I0108 20:59:34.523881  157471 main.go:141] libmachine: (functional-733963) Calling .GetSSHPort
I0108 20:59:34.524102  157471 main.go:141] libmachine: (functional-733963) Calling .GetSSHKeyPath
I0108 20:59:34.524264  157471 main.go:141] libmachine: (functional-733963) Calling .GetSSHUsername
I0108 20:59:34.524426  157471 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/functional-733963/id_rsa Username:docker}
I0108 20:59:34.616728  157471 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0108 20:59:34.661681  157471 main.go:141] libmachine: Making call to close driver server
I0108 20:59:34.661711  157471 main.go:141] libmachine: (functional-733963) Calling .Close
I0108 20:59:34.662044  157471 main.go:141] libmachine: Successfully made call to close driver server
I0108 20:59:34.662076  157471 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 20:59:34.662075  157471 main.go:141] libmachine: (functional-733963) DBG | Closing plugin on server side
I0108 20:59:34.662102  157471 main.go:141] libmachine: Making call to close driver server
I0108 20:59:34.662117  157471 main.go:141] libmachine: (functional-733963) Calling .Close
I0108 20:59:34.662352  157471 main.go:141] libmachine: (functional-733963) DBG | Closing plugin on server side
I0108 20:59:34.662384  157471 main.go:141] libmachine: Successfully made call to close driver server
I0108 20:59:34.662400  157471 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-733963 ssh pgrep buildkitd: exit status 1 (222.574529ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 image build -t localhost/my-image:functional-733963 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-733963 image build -t localhost/my-image:functional-733963 testdata/build --alsologtostderr: (3.34301114s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-733963 image build -t localhost/my-image:functional-733963 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 26a1b688621c
Removing intermediate container 26a1b688621c
---> 07f6473041db
Step 3/3 : ADD content.txt /
---> ef9765387af1
Successfully built ef9765387af1
Successfully tagged localhost/my-image:functional-733963
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-733963 image build -t localhost/my-image:functional-733963 testdata/build --alsologtostderr:
I0108 20:59:34.953261  157584 out.go:296] Setting OutFile to fd 1 ...
I0108 20:59:34.953589  157584 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:59:34.953600  157584 out.go:309] Setting ErrFile to fd 2...
I0108 20:59:34.953604  157584 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 20:59:34.953803  157584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-142784/.minikube/bin
I0108 20:59:34.954372  157584 config.go:182] Loaded profile config "functional-733963": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 20:59:34.954841  157584 config.go:182] Loaded profile config "functional-733963": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0108 20:59:34.955218  157584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0108 20:59:34.955251  157584 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 20:59:34.970196  157584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44583
I0108 20:59:34.970667  157584 main.go:141] libmachine: () Calling .GetVersion
I0108 20:59:34.971331  157584 main.go:141] libmachine: Using API Version  1
I0108 20:59:34.971355  157584 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 20:59:34.971793  157584 main.go:141] libmachine: () Calling .GetMachineName
I0108 20:59:34.972024  157584 main.go:141] libmachine: (functional-733963) Calling .GetState
I0108 20:59:34.974193  157584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0108 20:59:34.974245  157584 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 20:59:34.991085  157584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40849
I0108 20:59:34.991520  157584 main.go:141] libmachine: () Calling .GetVersion
I0108 20:59:34.992085  157584 main.go:141] libmachine: Using API Version  1
I0108 20:59:34.992114  157584 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 20:59:34.992468  157584 main.go:141] libmachine: () Calling .GetMachineName
I0108 20:59:34.992676  157584 main.go:141] libmachine: (functional-733963) Calling .DriverName
I0108 20:59:34.992908  157584 ssh_runner.go:195] Run: systemctl --version
I0108 20:59:34.992937  157584 main.go:141] libmachine: (functional-733963) Calling .GetSSHHostname
I0108 20:59:34.996457  157584 main.go:141] libmachine: (functional-733963) DBG | domain functional-733963 has defined MAC address 52:54:00:4b:30:bc in network mk-functional-733963
I0108 20:59:34.996927  157584 main.go:141] libmachine: (functional-733963) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:30:bc", ip: ""} in network mk-functional-733963: {Iface:virbr1 ExpiryTime:2024-01-08 21:56:33 +0000 UTC Type:0 Mac:52:54:00:4b:30:bc Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:functional-733963 Clientid:01:52:54:00:4b:30:bc}
I0108 20:59:34.996973  157584 main.go:141] libmachine: (functional-733963) DBG | domain functional-733963 has defined IP address 192.168.39.64 and MAC address 52:54:00:4b:30:bc in network mk-functional-733963
I0108 20:59:34.997035  157584 main.go:141] libmachine: (functional-733963) Calling .GetSSHPort
I0108 20:59:34.997191  157584 main.go:141] libmachine: (functional-733963) Calling .GetSSHKeyPath
I0108 20:59:34.997358  157584 main.go:141] libmachine: (functional-733963) Calling .GetSSHUsername
I0108 20:59:34.997492  157584 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/functional-733963/id_rsa Username:docker}
I0108 20:59:35.089706  157584 build_images.go:151] Building image from path: /tmp/build.2168098399.tar
I0108 20:59:35.089777  157584 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0108 20:59:35.102239  157584 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2168098399.tar
I0108 20:59:35.107050  157584 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2168098399.tar: stat -c "%s %y" /var/lib/minikube/build/build.2168098399.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2168098399.tar': No such file or directory
I0108 20:59:35.107081  157584 ssh_runner.go:362] scp /tmp/build.2168098399.tar --> /var/lib/minikube/build/build.2168098399.tar (3072 bytes)
I0108 20:59:35.131944  157584 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2168098399
I0108 20:59:35.144372  157584 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2168098399 -xf /var/lib/minikube/build/build.2168098399.tar
I0108 20:59:35.153378  157584 docker.go:346] Building image: /var/lib/minikube/build/build.2168098399
I0108 20:59:35.153459  157584 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-733963 /var/lib/minikube/build/build.2168098399
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
I0108 20:59:38.207758  157584 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-733963 /var/lib/minikube/build/build.2168098399: (3.054270177s)
I0108 20:59:38.207814  157584 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2168098399
I0108 20:59:38.218309  157584 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2168098399.tar
I0108 20:59:38.228030  157584 build_images.go:207] Built localhost/my-image:functional-733963 from /tmp/build.2168098399.tar
I0108 20:59:38.228070  157584 build_images.go:123] succeeded building to: functional-733963
I0108 20:59:38.228074  157584 build_images.go:124] failed building to: 
I0108 20:59:38.228140  157584 main.go:141] libmachine: Making call to close driver server
I0108 20:59:38.228158  157584 main.go:141] libmachine: (functional-733963) Calling .Close
I0108 20:59:38.228459  157584 main.go:141] libmachine: Successfully made call to close driver server
I0108 20:59:38.228473  157584 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 20:59:38.228481  157584 main.go:141] libmachine: Making call to close driver server
I0108 20:59:38.228488  157584 main.go:141] libmachine: (functional-733963) Calling .Close
I0108 20:59:38.228698  157584 main.go:141] libmachine: Successfully made call to close driver server
I0108 20:59:38.228716  157584 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.78s)
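
The ImageBuild test above drives the whole build through the guest's shell: the packed context is copied over SSH, unpacked under /var/lib/minikube/build, built with a plain "docker build", and the unpacked context is removed afterwards. A minimal Go sketch of that same sequence against a local Docker daemon (not minikube's actual implementation; the tarball path and image tag are copied from the log, and sudo is dropped):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// run executes one step and echoes its combined output, mirroring the
	// ssh_runner entries in the log above.
	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%s", name, args, out)
		return err
	}

	func main() {
		tarball := "/tmp/build.2168098399.tar"                 // packed build context
		buildDir := "/var/lib/minikube/build/build.2168098399" // unpack target
		tag := "localhost/my-image:functional-733963"

		steps := [][]string{
			{"mkdir", "-p", buildDir},               // "sudo mkdir -p" in the log
			{"tar", "-C", buildDir, "-xf", tarball}, // unpack the context
			{"docker", "build", "-t", tag, buildDir},
			{"rm", "-rf", buildDir}, // clean up the unpacked context
			{"rm", "-f", tarball},   // and the tarball itself
		}
		for _, s := range steps {
			if err := run(s[0], s[1:]...); err != nil {
				fmt.Fprintln(os.Stderr, "step failed:", err)
				os.Exit(1)
			}
		}
	}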

TestFunctional/parallel/ImageCommands/Setup (1.21s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.189309994s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-733963
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.21s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 image load --daemon gcr.io/google-containers/addon-resizer:functional-733963 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-733963 image load --daemon gcr.io/google-containers/addon-resizer:functional-733963 --alsologtostderr: (3.595231598s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.81s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 image load --daemon gcr.io/google-containers/addon-resizer:functional-733963 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-733963 image load --daemon gcr.io/google-containers/addon-resizer:functional-733963 --alsologtostderr: (2.295213765s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.54s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.173058224s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-733963
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 image load --daemon gcr.io/google-containers/addon-resizer:functional-733963 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-733963 image load --daemon gcr.io/google-containers/addon-resizer:functional-733963 --alsologtostderr: (4.70736234s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.23s)

TestFunctional/parallel/ServiceCmd/List (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.38s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 service list -o json
functional_test.go:1493: Took "316.915151ms" to run "out/minikube-linux-amd64 -p functional-733963 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.64:30808
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

TestFunctional/parallel/ServiceCmd/Format (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

TestFunctional/parallel/DockerEnv/bash (1.08s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-733963 docker-env) && out/minikube-linux-amd64 status -p functional-733963"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-733963 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.08s)

TestFunctional/parallel/ServiceCmd/URL (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.64:30808
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.44s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

TestFunctional/parallel/MountCmd/any-port (22s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-733963 /tmp/TestFunctionalparallelMountCmdany-port1626261073/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1704747549832043054" to /tmp/TestFunctionalparallelMountCmdany-port1626261073/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1704747549832043054" to /tmp/TestFunctionalparallelMountCmdany-port1626261073/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1704747549832043054" to /tmp/TestFunctionalparallelMountCmdany-port1626261073/001/test-1704747549832043054
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-733963 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (312.490461ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan  8 20:59 created-by-test
-rw-r--r-- 1 docker docker 24 Jan  8 20:59 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan  8 20:59 test-1704747549832043054
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh cat /mount-9p/test-1704747549832043054
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-733963 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [538be083-0926-4fbe-a229-c61e20ab8565] Pending
helpers_test.go:344: "busybox-mount" [538be083-0926-4fbe-a229-c61e20ab8565] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [538be083-0926-4fbe-a229-c61e20ab8565] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [538be083-0926-4fbe-a229-c61e20ab8565] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 19.004983415s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-733963 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-733963 /tmp/TestFunctionalparallelMountCmdany-port1626261073/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (22.00s)
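
Note the retry above: the first "findmnt -T /mount-9p" probe exits 1 because the 9p mount is published asynchronously after the mount helper starts, and the test simply probes again. A small sketch of that poll-until-mounted pattern (the interval and timeout here are assumptions, not the test's values):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForMount polls "findmnt -T dir" until it succeeds or the deadline
	// passes; a non-zero exit means nothing is mounted at dir yet.
	func waitForMount(dir string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if out, err := exec.Command("findmnt", "-T", dir).CombinedOutput(); err == nil {
				fmt.Printf("%s", out)
				return nil
			}
			time.Sleep(250 * time.Millisecond) // assumed poll interval
		}
		return fmt.Errorf("no mount at %s after %s", dir, timeout)
	}

	func main() {
		if err := waitForMount("/mount-9p", 10*time.Second); err != nil {
			fmt.Println(err)
		}
	}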

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 image save gcr.io/google-containers/addon-resizer:functional-733963 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-733963 image save gcr.io/google-containers/addon-resizer:functional-733963 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (2.093840745s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.09s)

TestFunctional/parallel/ImageCommands/ImageRemove (1.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 image rm gcr.io/google-containers/addon-resizer:functional-733963 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.25s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-733963 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.515265863s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.76s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-733963
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 image save --daemon gcr.io/google-containers/addon-resizer:functional-733963 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-733963 image save --daemon gcr.io/google-containers/addon-resizer:functional-733963 --alsologtostderr: (1.241369939s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-733963
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.27s)
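
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon exercise a save/load round trip: the tagged image survives being written to a tarball, untagged, and restored. A hedged sketch of the same round trip using the docker CLI directly (the image name is from the log; the tarball path is a placeholder):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		img := "gcr.io/google-containers/addon-resizer:functional-733963"
		tar := "/tmp/addon-resizer-save.tar" // placeholder path

		// Save, drop the local tag, restore from the tarball, then verify
		// the tag is back, much as the tests above do via "image ls"/"inspect".
		for _, args := range [][]string{
			{"image", "save", "-o", tar, img},
			{"image", "rm", img},
			{"image", "load", "-i", tar},
			{"image", "inspect", img},
		} {
			if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
				log.Fatalf("docker %v: %v\n%s", args, err, out)
			}
		}
	}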

TestFunctional/parallel/MountCmd/specific-port (2.04s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-733963 /tmp/TestFunctionalparallelMountCmdspecific-port229563481/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-733963 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (206.775596ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
2024/01/08 20:59:32 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-733963 /tmp/TestFunctionalparallelMountCmdspecific-port229563481/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-733963 ssh "sudo umount -f /mount-9p": exit status 1 (224.268204ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-733963 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-733963 /tmp/TestFunctionalparallelMountCmdspecific-port229563481/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.04s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.49s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-733963 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1821185749/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-733963 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1821185749/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-733963 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1821185749/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-733963 ssh "findmnt -T" /mount1: exit status 1 (309.734134ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-733963 ssh "findmnt -T" /mount3
E0108 20:59:35.102077  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/addons-188169/client.crt: no such file or directory
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-733963 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-733963 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1821185749/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-733963 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1821185749/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-733963 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1821185749/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.49s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-733963
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-733963
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-733963
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestGvisorAddon (292.04s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon
=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-282905 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-282905 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m14.365592163s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-282905 cache add gcr.io/k8s-minikube/gvisor-addon:2
E0108 21:30:59.496651  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/skaffold-274875/client.crt: no such file or directory
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-282905 cache add gcr.io/k8s-minikube/gvisor-addon:2: (23.285778174s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-282905 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-282905 addons enable gvisor: (5.324947149s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [de6b5cf7-80d9-4505-a9b0-b75453adde42] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.007139026s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-282905 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
E0108 21:31:16.224503  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/addons-188169/client.crt: no such file or directory
helpers_test.go:344: "nginx-gvisor" [0e5c12f7-4129-4d6b-a983-c39055704755] Pending
helpers_test.go:344: "nginx-gvisor" [0e5c12f7-4129-4d6b-a983-c39055704755] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [0e5c12f7-4129-4d6b-a983-c39055704755] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 14.005842061s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-282905
E0108 21:31:40.457427  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/skaffold-274875/client.crt: no such file or directory
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-282905: (1m32.041033482s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-282905 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E0108 21:33:02.378449  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/skaffold-274875/client.crt: no such file or directory
E0108 21:33:13.178382  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/addons-188169/client.crt: no such file or directory
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-282905 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m4.129784306s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [de6b5cf7-80d9-4505-a9b0-b75453adde42] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.0094834s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [0e5c12f7-4129-4d6b-a983-c39055704755] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.005393483s
helpers_test.go:175: Cleaning up "gvisor-282905" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-282905
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-282905: (1.245543746s)
--- PASS: TestGvisorAddon (292.04s)
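
TestGvisorAddon walks the full addon lifecycle: start a containerd-backed cluster, pre-cache the gvisor-addon image, enable the addon, run a gVisor-runtime pod, then prove it all survives a stop/start. A compressed sketch of the same sequence as CLI calls (arguments copied from the log; the pod deployment and readiness waits are elided):

	package main

	import (
		"log"
		"os/exec"
	)

	// mk shells out to the same binary the test drives.
	func mk(args ...string) {
		if out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput(); err != nil {
			log.Fatalf("minikube %v: %v\n%s", args, err, out)
		}
	}

	func main() {
		p := "gvisor-282905"
		start := []string{"start", "-p", p, "--memory=2200",
			"--container-runtime=containerd",
			"--docker-opt", "containerd=/var/run/containerd/containerd.sock",
			"--driver=kvm2"}
		mk(start...)
		mk("-p", p, "cache", "add", "gcr.io/k8s-minikube/gvisor-addon:2")
		mk("-p", p, "addons", "enable", "gvisor")
		// ... deploy testdata/nginx-gvisor.yaml and wait for the pod, then:
		mk("stop", "-p", p)
		mk(start...) // the addon and pod must come back after a restart
		mk("delete", "-p", p)
	}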

TestImageBuild/serial/Setup (46.92s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-398031 --driver=kvm2 
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-398031 --driver=kvm2 : (46.919439218s)
--- PASS: TestImageBuild/serial/Setup (46.92s)

TestImageBuild/serial/NormalBuild (1.54s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-398031
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-398031: (1.544699773s)
--- PASS: TestImageBuild/serial/NormalBuild (1.54s)

TestImageBuild/serial/BuildWithBuildArg (1.48s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-398031
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-398031: (1.483442443s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.48s)

TestImageBuild/serial/BuildWithDockerIgnore (0.42s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-398031
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.42s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.31s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-398031
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.31s)

TestIngressAddonLegacy/StartLegacyK8sCluster (75.93s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-183510 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 
E0108 21:00:57.022605  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/addons-188169/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-183510 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : (1m15.931758816s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (75.93s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.9s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-183510 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-183510 addons enable ingress --alsologtostderr -v=5: (16.899370896s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.90s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.54s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-183510 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.54s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (36.11s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-183510 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-183510 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.819668864s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-183510 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-183510 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a2244917-9338-4c5f-a90e-a3bf91071a34] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a2244917-9338-4c5f-a90e-a3bf91071a34] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 12.004115095s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-183510 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-183510 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-183510 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.5
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-183510 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-183510 addons disable ingress-dns --alsologtostderr -v=1: (2.608367522s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-183510 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-183510 addons disable ingress --alsologtostderr -v=1: (7.502464621s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (36.11s)
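
The ingress validation above boils down to an HTTP request whose Host header matches the Ingress rule ("nginx.example.com", from testdata/nginx-ingress-v1beta1.yaml). The test curls 127.0.0.1 from inside the node over SSH; a sketch of the equivalent check from outside, aimed at the node IP the log prints (192.168.39.5):

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		req, err := http.NewRequest("GET", "http://192.168.39.5/", nil)
		if err != nil {
			panic(err)
		}
		// The Host header, not the URL, selects the Ingress backend.
		req.Host = "nginx.example.com"
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status, len(body), "bytes")
	}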

TestJSONOutput/start/Command (67.57s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-585027 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E0108 21:03:13.178657  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/addons-188169/client.crt: no such file or directory
E0108 21:03:40.862897  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/addons-188169/client.crt: no such file or directory
E0108 21:03:54.406900  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/functional-733963/client.crt: no such file or directory
E0108 21:03:54.412233  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/functional-733963/client.crt: no such file or directory
E0108 21:03:54.422530  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/functional-733963/client.crt: no such file or directory
E0108 21:03:54.442812  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/functional-733963/client.crt: no such file or directory
E0108 21:03:54.483136  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/functional-733963/client.crt: no such file or directory
E0108 21:03:54.563520  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/functional-733963/client.crt: no such file or directory
E0108 21:03:54.724020  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/functional-733963/client.crt: no such file or directory
E0108 21:03:55.044653  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/functional-733963/client.crt: no such file or directory
E0108 21:03:55.685649  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/functional-733963/client.crt: no such file or directory
E0108 21:03:56.966316  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/functional-733963/client.crt: no such file or directory
E0108 21:03:59.528089  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/functional-733963/client.crt: no such file or directory
E0108 21:04:04.648365  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/functional-733963/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-585027 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m7.564978922s)
--- PASS: TestJSONOutput/start/Command (67.57s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.56s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-585027 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.56s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.53s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-585027 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.53s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.11s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-585027 --output=json --user=testUser
E0108 21:04:14.889575  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/functional-733963/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-585027 --output=json --user=testUser: (8.104953043s)
--- PASS: TestJSONOutput/stop/Command (8.11s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-686216 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-686216 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (79.090384ms)
-- stdout --
	{"specversion":"1.0","id":"de755b97-2881-4783-942a-575f4b3680b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-686216] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a4d32aa4-e5e9-4036-b0bb-5867a9791858","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17866"}}
	{"specversion":"1.0","id":"501e7bc5-3798-4f8e-9e6f-2b6427b6f964","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"57f6f729-a2d9-409b-b048-7e760252642c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17866-142784/kubeconfig"}}
	{"specversion":"1.0","id":"eeccf45e-cfcb-402b-b3c1-936c7ba8170f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-142784/.minikube"}}
	{"specversion":"1.0","id":"41de7904-22a2-4ab7-afae-0fee8f1a1af3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9f4d7783-0dfa-4e40-a891-a5f463ed4c4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"febc2677-d24b-40b0-a166-4c46ad49eec3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-686216" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-686216
--- PASS: TestErrorJSONOutput (0.22s)
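
The stdout captured above is minikube's JSON output mode: one CloudEvents-style object per line, with the payload under "data". A hedged sketch of decoding one such line in Go (field names follow the JSON keys visible in the log; everything else is an assumption):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// cloudEvent mirrors the envelope seen in the TestErrorJSONOutput stdout.
	type cloudEvent struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		// The error event from the log above, verbatim.
		line := `{"specversion":"1.0","id":"febc2677-d24b-40b0-a166-4c46ad49eec3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`
		var ev cloudEvent
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		fmt.Println(ev.Type, ev.Data["exitcode"], ev.Data["message"])
	}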

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (101.34s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-951854 --driver=kvm2 
E0108 21:04:35.370666  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/functional-733963/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-951854 --driver=kvm2 : (49.720501965s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-955131 --driver=kvm2 
E0108 21:05:16.331690  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/functional-733963/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-955131 --driver=kvm2 : (48.956531747s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-951854
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-955131
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-955131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-955131
helpers_test.go:175: Cleaning up "first-951854" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-951854
--- PASS: TestMinikubeProfile (101.34s)

TestMountStart/serial/StartWithMountFirst (29.91s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-363310 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-363310 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (28.910652423s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.91s)

TestMountStart/serial/VerifyMountFirst (0.4s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-363310 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-363310 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)
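
Note: both verification steps shell out to `minikube ssh` inside the guest: `ls /minikube-host` proves the mount point exists, and `mount | grep 9p` proves it is a 9p filesystem. A reduced Go sketch of the same check (binary path and profile name copied from the commands above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	bin := "out/minikube-linux-amd64" // binary path as used by the test
	// `ls` on the mount point fails if the host directory is not mounted.
	if err := exec.Command(bin, "-p", "mount-start-1-363310", "ssh", "--", "ls", "/minikube-host").Run(); err != nil {
		panic(err)
	}
	// The guest's mount table should list a 9p filesystem for the host mount.
	out, err := exec.Command(bin, "-p", "mount-start-1-363310", "ssh", "--", "mount").Output()
	if err != nil {
		panic(err)
	}
	if !strings.Contains(string(out), "9p") {
		panic("no 9p mount found in guest mount table")
	}
	fmt.Println("9p host mount verified")
}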

TestMountStart/serial/StartWithMountSecond (29.79s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-381963 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0108 21:06:38.253539  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/functional-733963/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-381963 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (28.790956984s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.79s)

TestMountStart/serial/VerifyMountSecond (0.41s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-381963 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-381963 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.41s)

TestMountStart/serial/DeleteFirst (0.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-363310 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

TestMountStart/serial/VerifyMountPostDelete (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-381963 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-381963 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

TestMountStart/serial/Stop (2.1s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-381963
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-381963: (2.095205548s)
--- PASS: TestMountStart/serial/Stop (2.10s)

TestMountStart/serial/RestartStopped (24.65s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-381963
E0108 21:07:25.661288  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/ingress-addon-legacy-183510/client.crt: no such file or directory
E0108 21:07:25.666531  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/ingress-addon-legacy-183510/client.crt: no such file or directory
E0108 21:07:25.676803  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/ingress-addon-legacy-183510/client.crt: no such file or directory
E0108 21:07:25.697347  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/ingress-addon-legacy-183510/client.crt: no such file or directory
E0108 21:07:25.737675  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/ingress-addon-legacy-183510/client.crt: no such file or directory
E0108 21:07:25.818113  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/ingress-addon-legacy-183510/client.crt: no such file or directory
E0108 21:07:25.978544  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/ingress-addon-legacy-183510/client.crt: no such file or directory
E0108 21:07:26.299171  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/ingress-addon-legacy-183510/client.crt: no such file or directory
E0108 21:07:26.939516  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/ingress-addon-legacy-183510/client.crt: no such file or directory
E0108 21:07:28.219812  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/ingress-addon-legacy-183510/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-381963: (23.650454864s)
--- PASS: TestMountStart/serial/RestartStopped (24.65s)

TestMountStart/serial/VerifyMountPostStop (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-381963 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-381963 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

TestMultiNode/serial/FreshStart2Nodes (126.01s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-472593 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0108 21:07:35.901895  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/ingress-addon-legacy-183510/client.crt: no such file or directory
E0108 21:07:46.142622  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/ingress-addon-legacy-183510/client.crt: no such file or directory
E0108 21:08:06.623424  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/ingress-addon-legacy-183510/client.crt: no such file or directory
E0108 21:08:13.178618  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/addons-188169/client.crt: no such file or directory
E0108 21:08:47.584639  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/ingress-addon-legacy-183510/client.crt: no such file or directory
E0108 21:08:54.406428  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/functional-733963/client.crt: no such file or directory
E0108 21:09:22.094624  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/functional-733963/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-472593 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m5.57502991s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (126.01s)

TestMultiNode/serial/DeployApp2Nodes (4.92s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-472593 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-472593 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-472593 -- rollout status deployment/busybox: (3.040330496s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-472593 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-472593 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-472593 -- exec busybox-5bc68d56bd-gp7d2 -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-472593 -- exec busybox-5bc68d56bd-px9bf -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-472593 -- exec busybox-5bc68d56bd-gp7d2 -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-472593 -- exec busybox-5bc68d56bd-px9bf -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-472593 -- exec busybox-5bc68d56bd-gp7d2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-472593 -- exec busybox-5bc68d56bd-px9bf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.92s)
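
Note: after the rollout, the test resolves three names (kubernetes.io, kubernetes.default, kubernetes.default.svc.cluster.local) from each busybox pod via `kubectl exec ... nslookup`, exercising both external and in-cluster DNS from both nodes. A condensed Go sketch of that loop (pod names copied from the output above; in the test they come from the preceding `get pods` query):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-5bc68d56bd-gp7d2", "busybox-5bc68d56bd-px9bf"} // from the log above
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			// Mirrors: kubectl exec <pod> -- nslookup <name>
			cmd := exec.Command("kubectl", "exec", pod, "--", "nslookup", name)
			if out, err := cmd.CombinedOutput(); err != nil {
				fmt.Printf("%s failed to resolve %s: %v\n%s", pod, name, err, out)
			}
		}
	}
}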

TestMultiNode/serial/PingHostFrom2Pods (0.94s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-472593 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-472593 -- exec busybox-5bc68d56bd-gp7d2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-472593 -- exec busybox-5bc68d56bd-gp7d2 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-472593 -- exec busybox-5bc68d56bd-px9bf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-472593 -- exec busybox-5bc68d56bd-px9bf -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.94s)

TestMultiNode/serial/AddNode (45.49s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-472593 -v 3 --alsologtostderr
E0108 21:10:09.505435  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/ingress-addon-legacy-183510/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-472593 -v 3 --alsologtostderr: (44.884754296s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.49s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-472593 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.22s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

TestMultiNode/serial/CopyFile (7.94s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 cp testdata/cp-test.txt multinode-472593:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 ssh -n multinode-472593 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 cp multinode-472593:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2621877827/001/cp-test_multinode-472593.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 ssh -n multinode-472593 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 cp multinode-472593:/home/docker/cp-test.txt multinode-472593-m02:/home/docker/cp-test_multinode-472593_multinode-472593-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 ssh -n multinode-472593 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 ssh -n multinode-472593-m02 "sudo cat /home/docker/cp-test_multinode-472593_multinode-472593-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 cp multinode-472593:/home/docker/cp-test.txt multinode-472593-m03:/home/docker/cp-test_multinode-472593_multinode-472593-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 ssh -n multinode-472593 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 ssh -n multinode-472593-m03 "sudo cat /home/docker/cp-test_multinode-472593_multinode-472593-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 cp testdata/cp-test.txt multinode-472593-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 ssh -n multinode-472593-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 cp multinode-472593-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2621877827/001/cp-test_multinode-472593-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 ssh -n multinode-472593-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 cp multinode-472593-m02:/home/docker/cp-test.txt multinode-472593:/home/docker/cp-test_multinode-472593-m02_multinode-472593.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 ssh -n multinode-472593-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 ssh -n multinode-472593 "sudo cat /home/docker/cp-test_multinode-472593-m02_multinode-472593.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 cp multinode-472593-m02:/home/docker/cp-test.txt multinode-472593-m03:/home/docker/cp-test_multinode-472593-m02_multinode-472593-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 ssh -n multinode-472593-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 ssh -n multinode-472593-m03 "sudo cat /home/docker/cp-test_multinode-472593-m02_multinode-472593-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 cp testdata/cp-test.txt multinode-472593-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 ssh -n multinode-472593-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 cp multinode-472593-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2621877827/001/cp-test_multinode-472593-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 ssh -n multinode-472593-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 cp multinode-472593-m03:/home/docker/cp-test.txt multinode-472593:/home/docker/cp-test_multinode-472593-m03_multinode-472593.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 ssh -n multinode-472593-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 ssh -n multinode-472593 "sudo cat /home/docker/cp-test_multinode-472593-m03_multinode-472593.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 cp multinode-472593-m03:/home/docker/cp-test.txt multinode-472593-m02:/home/docker/cp-test_multinode-472593-m03_multinode-472593-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 ssh -n multinode-472593-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 ssh -n multinode-472593-m02 "sudo cat /home/docker/cp-test_multinode-472593-m03_multinode-472593-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.94s)
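
Note: CopyFile pushes testdata/cp-test.txt to every node, copies it node-to-node, and verifies each hop with `ssh -n <node> "sudo cat ..."`. A reduced Go sketch of one push-and-verify round trip (profile name and paths copied from the log; the byte comparison is how one might assert the hop succeeded):

package main

import (
	"bytes"
	"os"
	"os/exec"
)

func main() {
	bin := "out/minikube-linux-amd64"
	// Push the fixture to the node...
	if err := exec.Command(bin, "-p", "multinode-472593", "cp",
		"testdata/cp-test.txt", "multinode-472593:/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}
	// ...then read it back over ssh and compare byte-for-byte.
	got, err := exec.Command(bin, "-p", "multinode-472593", "ssh", "-n",
		"multinode-472593", "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	if !bytes.Equal(got, want) {
		panic("copied file does not match source")
	}
}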

TestMultiNode/serial/StopNode (3.34s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-472593 node stop m03: (2.426468538s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-472593 status: exit status 7 (455.272481ms)

-- stdout --
	multinode-472593
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-472593-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-472593-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-472593 status --alsologtostderr: exit status 7 (454.845231ms)

-- stdout --
	multinode-472593
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-472593-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-472593-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0108 21:10:39.591786  164649 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:10:39.592041  164649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:10:39.592050  164649 out.go:309] Setting ErrFile to fd 2...
	I0108 21:10:39.592055  164649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:10:39.592216  164649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-142784/.minikube/bin
	I0108 21:10:39.592364  164649 out.go:303] Setting JSON to false
	I0108 21:10:39.592403  164649 mustload.go:65] Loading cluster: multinode-472593
	I0108 21:10:39.592502  164649 notify.go:220] Checking for updates...
	I0108 21:10:39.592843  164649 config.go:182] Loaded profile config "multinode-472593": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 21:10:39.592861  164649 status.go:255] checking status of multinode-472593 ...
	I0108 21:10:39.593217  164649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0108 21:10:39.593281  164649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:10:39.613183  164649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43711
	I0108 21:10:39.613654  164649 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:10:39.614249  164649 main.go:141] libmachine: Using API Version  1
	I0108 21:10:39.614275  164649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:10:39.614614  164649 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:10:39.614790  164649 main.go:141] libmachine: (multinode-472593) Calling .GetState
	I0108 21:10:39.616225  164649 status.go:330] multinode-472593 host status = "Running" (err=<nil>)
	I0108 21:10:39.616240  164649 host.go:66] Checking if "multinode-472593" exists ...
	I0108 21:10:39.616537  164649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0108 21:10:39.616576  164649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:10:39.631235  164649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44205
	I0108 21:10:39.631630  164649 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:10:39.632184  164649 main.go:141] libmachine: Using API Version  1
	I0108 21:10:39.632221  164649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:10:39.632566  164649 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:10:39.632760  164649 main.go:141] libmachine: (multinode-472593) Calling .GetIP
	I0108 21:10:39.635844  164649 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:10:39.636254  164649 main.go:141] libmachine: (multinode-472593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:79:5e", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:07:46 +0000 UTC Type:0 Mac:52:54:00:18:79:5e Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:multinode-472593 Clientid:01:52:54:00:18:79:5e}
	I0108 21:10:39.636284  164649 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined IP address 192.168.39.250 and MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:10:39.636402  164649 host.go:66] Checking if "multinode-472593" exists ...
	I0108 21:10:39.636699  164649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0108 21:10:39.636741  164649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:10:39.652218  164649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44499
	I0108 21:10:39.652717  164649 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:10:39.653238  164649 main.go:141] libmachine: Using API Version  1
	I0108 21:10:39.653263  164649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:10:39.653622  164649 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:10:39.653816  164649 main.go:141] libmachine: (multinode-472593) Calling .DriverName
	I0108 21:10:39.654010  164649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:10:39.654035  164649 main.go:141] libmachine: (multinode-472593) Calling .GetSSHHostname
	I0108 21:10:39.657180  164649 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:10:39.657621  164649 main.go:141] libmachine: (multinode-472593) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:79:5e", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:07:46 +0000 UTC Type:0 Mac:52:54:00:18:79:5e Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:multinode-472593 Clientid:01:52:54:00:18:79:5e}
	I0108 21:10:39.657666  164649 main.go:141] libmachine: (multinode-472593) DBG | domain multinode-472593 has defined IP address 192.168.39.250 and MAC address 52:54:00:18:79:5e in network mk-multinode-472593
	I0108 21:10:39.657790  164649 main.go:141] libmachine: (multinode-472593) Calling .GetSSHPort
	I0108 21:10:39.657976  164649 main.go:141] libmachine: (multinode-472593) Calling .GetSSHKeyPath
	I0108 21:10:39.658149  164649 main.go:141] libmachine: (multinode-472593) Calling .GetSSHUsername
	I0108 21:10:39.658316  164649 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593/id_rsa Username:docker}
	I0108 21:10:39.748397  164649 ssh_runner.go:195] Run: systemctl --version
	I0108 21:10:39.757632  164649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:10:39.774837  164649 kubeconfig.go:92] found "multinode-472593" server: "https://192.168.39.250:8443"
	I0108 21:10:39.774863  164649 api_server.go:166] Checking apiserver status ...
	I0108 21:10:39.774892  164649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 21:10:39.786552  164649 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1836/cgroup
	I0108 21:10:39.794751  164649 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/podb179c45695f1bdcc29858d4d51fc6758/05835bf9e682cb4e622d9b23b03f35e2e5b05eee41476a8a31372dee9b59e828"
	I0108 21:10:39.794810  164649 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podb179c45695f1bdcc29858d4d51fc6758/05835bf9e682cb4e622d9b23b03f35e2e5b05eee41476a8a31372dee9b59e828/freezer.state
	I0108 21:10:39.803062  164649 api_server.go:204] freezer state: "THAWED"
	I0108 21:10:39.803086  164649 api_server.go:253] Checking apiserver healthz at https://192.168.39.250:8443/healthz ...
	I0108 21:10:39.807595  164649 api_server.go:279] https://192.168.39.250:8443/healthz returned 200:
	ok
	I0108 21:10:39.807621  164649 status.go:421] multinode-472593 apiserver status = Running (err=<nil>)
	I0108 21:10:39.807630  164649 status.go:257] multinode-472593 status: &{Name:multinode-472593 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0108 21:10:39.807647  164649 status.go:255] checking status of multinode-472593-m02 ...
	I0108 21:10:39.808003  164649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0108 21:10:39.808038  164649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:10:39.822591  164649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43947
	I0108 21:10:39.823068  164649 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:10:39.823523  164649 main.go:141] libmachine: Using API Version  1
	I0108 21:10:39.823562  164649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:10:39.823867  164649 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:10:39.824018  164649 main.go:141] libmachine: (multinode-472593-m02) Calling .GetState
	I0108 21:10:39.825509  164649 status.go:330] multinode-472593-m02 host status = "Running" (err=<nil>)
	I0108 21:10:39.825526  164649 host.go:66] Checking if "multinode-472593-m02" exists ...
	I0108 21:10:39.825791  164649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0108 21:10:39.825821  164649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:10:39.839629  164649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45833
	I0108 21:10:39.840020  164649 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:10:39.840466  164649 main.go:141] libmachine: Using API Version  1
	I0108 21:10:39.840485  164649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:10:39.840760  164649 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:10:39.840952  164649 main.go:141] libmachine: (multinode-472593-m02) Calling .GetIP
	I0108 21:10:39.843479  164649 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:10:39.843920  164649 main.go:141] libmachine: (multinode-472593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:ba:0a", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:09:02 +0000 UTC Type:0 Mac:52:54:00:92:ba:0a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:multinode-472593-m02 Clientid:01:52:54:00:92:ba:0a}
	I0108 21:10:39.843944  164649 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:10:39.844086  164649 host.go:66] Checking if "multinode-472593-m02" exists ...
	I0108 21:10:39.844382  164649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0108 21:10:39.844412  164649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:10:39.858738  164649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38615
	I0108 21:10:39.859056  164649 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:10:39.859424  164649 main.go:141] libmachine: Using API Version  1
	I0108 21:10:39.859448  164649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:10:39.859779  164649 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:10:39.859955  164649 main.go:141] libmachine: (multinode-472593-m02) Calling .DriverName
	I0108 21:10:39.860143  164649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 21:10:39.860161  164649 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHHostname
	I0108 21:10:39.862514  164649 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:10:39.862905  164649 main.go:141] libmachine: (multinode-472593-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:ba:0a", ip: ""} in network mk-multinode-472593: {Iface:virbr1 ExpiryTime:2024-01-08 22:09:02 +0000 UTC Type:0 Mac:52:54:00:92:ba:0a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:multinode-472593-m02 Clientid:01:52:54:00:92:ba:0a}
	I0108 21:10:39.862936  164649 main.go:141] libmachine: (multinode-472593-m02) DBG | domain multinode-472593-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:92:ba:0a in network mk-multinode-472593
	I0108 21:10:39.863065  164649 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHPort
	I0108 21:10:39.863246  164649 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHKeyPath
	I0108 21:10:39.863429  164649 main.go:141] libmachine: (multinode-472593-m02) Calling .GetSSHUsername
	I0108 21:10:39.863605  164649 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17866-142784/.minikube/machines/multinode-472593-m02/id_rsa Username:docker}
	I0108 21:10:39.956119  164649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 21:10:39.969030  164649 status.go:257] multinode-472593-m02 status: &{Name:multinode-472593-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0108 21:10:39.969062  164649 status.go:255] checking status of multinode-472593-m03 ...
	I0108 21:10:39.969484  164649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0108 21:10:39.969525  164649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:10:39.984182  164649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38025
	I0108 21:10:39.984673  164649 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:10:39.985118  164649 main.go:141] libmachine: Using API Version  1
	I0108 21:10:39.985140  164649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:10:39.985481  164649 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:10:39.985687  164649 main.go:141] libmachine: (multinode-472593-m03) Calling .GetState
	I0108 21:10:39.987120  164649 status.go:330] multinode-472593-m03 host status = "Stopped" (err=<nil>)
	I0108 21:10:39.987132  164649 status.go:343] host is not running, skipping remaining checks
	I0108 21:10:39.987137  164649 status.go:257] multinode-472593-m03 status: &{Name:multinode-472593-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.34s)
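
Note: with m03 stopped, `minikube status` exits with status 7 rather than 0 (see the output above), so the test asserts on the exit code instead of treating the non-zero exit as a failure. A Go sketch of reading that code with only the standard library:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-472593", "status")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit status 7 is expected here while a node is stopped.
		fmt.Printf("status exited %d\n%s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		panic(err) // the binary could not be run at all
	}
	fmt.Printf("all nodes running\n%s", out)
}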

TestMultiNode/serial/RestartKeepsNodes (253.57s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-472593
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-472593
E0108 21:12:25.661120  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/ingress-addon-legacy-183510/client.crt: no such file or directory
E0108 21:12:53.345948  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/ingress-addon-legacy-183510/client.crt: no such file or directory
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-472593: (1m55.070328841s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-472593 --wait=true -v=8 --alsologtostderr
E0108 21:13:13.177867  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/addons-188169/client.crt: no such file or directory
E0108 21:13:54.406989  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/functional-733963/client.crt: no such file or directory
E0108 21:14:36.223902  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/addons-188169/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-472593 --wait=true -v=8 --alsologtostderr: (2m18.373927032s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-472593
--- PASS: TestMultiNode/serial/RestartKeepsNodes (253.57s)

TestMultiNode/serial/DeleteNode (1.57s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-472593 node delete m03: (1.029602725s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.57s)
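
Note: the readiness check above hands kubectl a Go text/template (`-o go-template=...`) that walks .items and prints the status of each node's Ready condition. Since kubectl's go-template output is standard Go text/template, the same template can be executed locally against `kubectl get nodes -o json` to see what it evaluates to (a sketch for illustration, not part of the test):

package main

import (
	"encoding/json"
	"os"
	"os/exec"
	"text/template"
)

func main() {
	raw, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var nodes map[string]any
	if err := json.Unmarshal(raw, &nodes); err != nil {
		panic(err)
	}
	ready := template.Must(template.New("ready").Parse(
		`{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))
	// Prints one " True" (or " False") line per node, matching the test's check.
	if err := ready.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}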

TestMultiNode/serial/StopMultiNode (25.58s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-472593 stop: (25.388954555s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-472593 status: exit status 7 (97.298055ms)

-- stdout --
	multinode-472593
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-472593-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-472593 status --alsologtostderr: exit status 7 (92.602821ms)

-- stdout --
	multinode-472593
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-472593-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0108 21:15:41.438827  166292 out.go:296] Setting OutFile to fd 1 ...
	I0108 21:15:41.438936  166292 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:15:41.438944  166292 out.go:309] Setting ErrFile to fd 2...
	I0108 21:15:41.438949  166292 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 21:15:41.439167  166292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17866-142784/.minikube/bin
	I0108 21:15:41.439324  166292 out.go:303] Setting JSON to false
	I0108 21:15:41.439357  166292 mustload.go:65] Loading cluster: multinode-472593
	I0108 21:15:41.439449  166292 notify.go:220] Checking for updates...
	I0108 21:15:41.439829  166292 config.go:182] Loaded profile config "multinode-472593": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0108 21:15:41.439844  166292 status.go:255] checking status of multinode-472593 ...
	I0108 21:15:41.440243  166292 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0108 21:15:41.440320  166292 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:15:41.456725  166292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42417
	I0108 21:15:41.457153  166292 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:15:41.457690  166292 main.go:141] libmachine: Using API Version  1
	I0108 21:15:41.457717  166292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:15:41.458019  166292 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:15:41.458198  166292 main.go:141] libmachine: (multinode-472593) Calling .GetState
	I0108 21:15:41.459904  166292 status.go:330] multinode-472593 host status = "Stopped" (err=<nil>)
	I0108 21:15:41.459931  166292 status.go:343] host is not running, skipping remaining checks
	I0108 21:15:41.459938  166292 status.go:257] multinode-472593 status: &{Name:multinode-472593 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0108 21:15:41.459968  166292 status.go:255] checking status of multinode-472593-m02 ...
	I0108 21:15:41.460260  166292 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0108 21:15:41.460301  166292 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 21:15:41.474463  166292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40039
	I0108 21:15:41.474829  166292 main.go:141] libmachine: () Calling .GetVersion
	I0108 21:15:41.475344  166292 main.go:141] libmachine: Using API Version  1
	I0108 21:15:41.475367  166292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 21:15:41.475656  166292 main.go:141] libmachine: () Calling .GetMachineName
	I0108 21:15:41.475824  166292 main.go:141] libmachine: (multinode-472593-m02) Calling .GetState
	I0108 21:15:41.477256  166292 status.go:330] multinode-472593-m02 host status = "Stopped" (err=<nil>)
	I0108 21:15:41.477267  166292 status.go:343] host is not running, skipping remaining checks
	I0108 21:15:41.477272  166292 status.go:257] multinode-472593-m02 status: &{Name:multinode-472593-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.58s)

TestMultiNode/serial/RestartMultiNode (103.88s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-472593 --wait=true -v=8 --alsologtostderr --driver=kvm2 
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-472593 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m43.315205909s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-472593 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (103.88s)

TestMultiNode/serial/ValidateNameConflict (51.46s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-472593
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-472593-m02 --driver=kvm2 
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-472593-m02 --driver=kvm2 : exit status 14 (77.698714ms)

-- stdout --
	* [multinode-472593-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-142784/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-142784/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-472593-m02' is duplicated with machine name 'multinode-472593-m02' in profile 'multinode-472593'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-472593-m03 --driver=kvm2 
E0108 21:17:25.660647  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/ingress-addon-legacy-183510/client.crt: no such file or directory
E0108 21:18:13.177943  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/addons-188169/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-472593-m03 --driver=kvm2 : (50.251528687s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-472593
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-472593: exit status 80 (246.55634ms)

-- stdout --
	* Adding node m03 to cluster multinode-472593
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-472593-m03 already exists in multinode-472593-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-472593-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (51.46s)
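
Note: the two negative paths above fail fast with distinct exit codes: 14 for the usage error (MK_USAGE, duplicate profile name) and 80 for the guest error (GUEST_NODE_ADD, node already exists). A sketch that branches on the codes observed in this log (only these two values are covered; the full exit-code table is not reproduced here):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("out/minikube-linux-amd64", "node", "add", "-p", "multinode-472593").Run()
	if err == nil {
		fmt.Println("node added")
		return
	}
	var exitErr *exec.ExitError
	if !errors.As(err, &exitErr) {
		panic(err) // the binary could not be started at all
	}
	switch exitErr.ExitCode() {
	case 14:
		fmt.Println("usage error (MK_USAGE), e.g. duplicate profile name")
	case 80:
		fmt.Println("guest node error (GUEST_NODE_ADD), e.g. node already exists")
	default:
		fmt.Println("unexpected exit code:", exitErr.ExitCode())
	}
}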

TestPreload (170.67s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-699147 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0108 21:18:54.406623  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/functional-733963/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-699147 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m28.62129018s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-699147 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-699147 image pull gcr.io/k8s-minikube/busybox: (1.212230749s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-699147
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-699147: (13.11291111s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-699147 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E0108 21:20:17.455806  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/functional-733963/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-699147 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m6.456279447s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-699147 image list
helpers_test.go:175: Cleaning up "test-preload-699147" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-699147
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-699147: (1.040296885s)
--- PASS: TestPreload (170.67s)

TestScheduledStopUnix (120.77s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-144992 --memory=2048 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-144992 --memory=2048 --driver=kvm2 : (48.905876567s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-144992 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-144992 -n scheduled-stop-144992
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-144992 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-144992 --cancel-scheduled
E0108 21:22:25.661275  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/ingress-addon-legacy-183510/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-144992 -n scheduled-stop-144992
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-144992
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-144992 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-144992
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-144992: exit status 7 (76.914148ms)

-- stdout --
	scheduled-stop-144992
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-144992 -n scheduled-stop-144992
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-144992 -n scheduled-stop-144992: exit status 7 (75.350903ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-144992" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-144992
--- PASS: TestScheduledStopUnix (120.77s)
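
Note: the scheduled-stop assertions poll `minikube status --format={{.Host}}`, a Go template over the status struct, and expect "Stopped" once the schedule fires (exit status 7 is tolerated at that point, as the "may be ok" line above indicates). A sketch of such a polling loop (the timeout value is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		out, _ := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "scheduled-stop-144992").Output()
		// Exit status 7 is expected once the host is stopped; only the text matters here.
		if strings.TrimSpace(string(out)) == "Stopped" {
			fmt.Println("host reached Stopped")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for Stopped")
}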

TestSkaffold (138.83s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe561390060 version
skaffold_test.go:63: skaffold version: v2.9.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-274875 --memory=2600 --driver=kvm2 
E0108 21:23:13.178094  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/addons-188169/client.crt: no such file or directory
E0108 21:23:48.706514  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/ingress-addon-legacy-183510/client.crt: no such file or directory
E0108 21:23:54.406688  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/functional-733963/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-274875 --memory=2600 --driver=kvm2 : (48.537677949s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe561390060 run --minikube-profile skaffold-274875 --kube-context skaffold-274875 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe561390060 run --minikube-profile skaffold-274875 --kube-context skaffold-274875 --status-check=true --port-forward=false --interactive=false: (1m17.290288133s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-746b4f5749-4nj9f" [1b807ad6-d8c8-461b-82a1-6470e154190c] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004099631s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-96f4cdb8b-2g9bq" [94192774-42e3-4663-88f6-852c1b1a7a8f] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.005248143s
helpers_test.go:175: Cleaning up "skaffold-274875" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-274875
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-274875: (1.179643739s)
--- PASS: TestSkaffold (138.83s)

TestRunningBinaryUpgrade (177.94s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.2002826035.exe start -p running-upgrade-186965 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.2002826035.exe start -p running-upgrade-186965 --memory=2200 --vm-driver=kvm2 : (1m36.396252683s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-186965 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-186965 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m19.074110102s)
helpers_test.go:175: Cleaning up "running-upgrade-186965" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-186965
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-186965: (2.191357757s)
--- PASS: TestRunningBinaryUpgrade (177.94s)

TestKubernetesUpgrade (235.69s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-201243 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-201243 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : (1m39.112580718s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-201243
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-201243: (13.190914772s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-201243 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-201243 status --format={{.Host}}: exit status 7 (125.658835ms)

-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-201243 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2 
E0108 21:27:25.661252  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/ingress-addon-legacy-183510/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-201243 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2 : (1m11.195375484s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-201243 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-201243 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-201243 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 : exit status 106 (127.306232ms)

-- stdout --
	* [kubernetes-upgrade-201243] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-142784/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-142784/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-201243
	    minikube start -p kubernetes-upgrade-201243 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2012432 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-201243 --kubernetes-version=v1.29.0-rc.2

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-201243 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-201243 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2 : (50.657332365s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-201243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-201243
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-201243: (1.202510235s)
--- PASS: TestKubernetesUpgrade (235.69s)
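Condensed, the flow this test exercises (commands as run above) is: start on an old Kubernetes, stop, restart on a newer version, then confirm a downgrade is refused:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-201243 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2
	out/minikube-linux-amd64 stop -p kubernetes-upgrade-201243
	out/minikube-linux-amd64 start -p kubernetes-upgrade-201243 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --driver=kvm2
	# The downgrade attempt exits 106 (K8S_DOWNGRADE_UNSUPPORTED), as shown above
	out/minikube-linux-amd64 start -p kubernetes-upgrade-201243 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2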

TestPause/serial/Start (117.63s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-470824 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-470824 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m57.626744959s)
--- PASS: TestPause/serial/Start (117.63s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-619576 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-619576 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (85.170533ms)

-- stdout --
	* [NoKubernetes-619576] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17866-142784/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17866-142784/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
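The MK_USAGE rejection above is the behavior under test: --kubernetes-version and --no-kubernetes are mutually exclusive. Following the CLI's own suggestion, a working sequence would be:

	# Clear any globally pinned version, then start without Kubernetes
	out/minikube-linux-amd64 config unset kubernetes-version
	out/minikube-linux-amd64 start -p NoKubernetes-619576 --no-kubernetes --driver=kvm2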

TestNoKubernetes/serial/StartWithK8s (66.31s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-619576 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-619576 --driver=kvm2 : (1m5.996713971s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-619576 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (66.31s)

TestPause/serial/SecondStartNoReconfiguration (76.16s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-470824 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-470824 --alsologtostderr -v=1 --driver=kvm2 : (1m16.13339791s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (76.16s)

TestNoKubernetes/serial/StartWithStopK8s (38.06s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-619576 --no-kubernetes --driver=kvm2 
E0108 21:28:13.177890  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/addons-188169/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-619576 --no-kubernetes --driver=kvm2 : (36.697633796s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-619576 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-619576 status -o json: exit status 2 (292.396541ms)

-- stdout --
	{"Name":"NoKubernetes-619576","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-619576
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-619576: (1.072593703s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (38.06s)

TestNoKubernetes/serial/Start (32.04s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-619576 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-619576 --no-kubernetes --driver=kvm2 : (32.037372909s)
--- PASS: TestNoKubernetes/serial/Start (32.04s)

TestStoppedBinaryUpgrade/Setup (0.41s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.41s)

TestStoppedBinaryUpgrade/Upgrade (228.49s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.797846189.exe start -p stopped-upgrade-155734 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.797846189.exe start -p stopped-upgrade-155734 --memory=2200 --vm-driver=kvm2 : (1m37.945246005s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.797846189.exe -p stopped-upgrade-155734 stop
E0108 21:30:18.534550  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/skaffold-274875/client.crt: no such file or directory
E0108 21:30:18.539982  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/skaffold-274875/client.crt: no such file or directory
E0108 21:30:18.550292  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/skaffold-274875/client.crt: no such file or directory
E0108 21:30:18.571188  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/skaffold-274875/client.crt: no such file or directory
E0108 21:30:18.611520  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/skaffold-274875/client.crt: no such file or directory
E0108 21:30:18.692344  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/skaffold-274875/client.crt: no such file or directory
E0108 21:30:18.852881  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/skaffold-274875/client.crt: no such file or directory
E0108 21:30:19.173141  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/skaffold-274875/client.crt: no such file or directory
E0108 21:30:19.813600  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/skaffold-274875/client.crt: no such file or directory
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.797846189.exe -p stopped-upgrade-155734 stop: (13.079262372s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-155734 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E0108 21:30:21.094217  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/skaffold-274875/client.crt: no such file or directory
E0108 21:30:23.654641  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/skaffold-274875/client.crt: no such file or directory
E0108 21:30:28.775435  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/skaffold-274875/client.crt: no such file or directory
E0108 21:30:39.016310  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/skaffold-274875/client.crt: no such file or directory
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-155734 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m57.469414354s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (228.49s)
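Condensed, the stopped-binary upgrade flow is: create and stop a cluster with the old release, then restart the same profile with the binary under test (commands as run above, logging flags omitted):

	/tmp/minikube-v1.6.2.797846189.exe start -p stopped-upgrade-155734 --memory=2200 --vm-driver=kvm2
	/tmp/minikube-v1.6.2.797846189.exe -p stopped-upgrade-155734 stop
	out/minikube-linux-amd64 start -p stopped-upgrade-155734 --memory=2200 --driver=kvm2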

TestPause/serial/Pause (0.61s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-470824 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.61s)

TestPause/serial/VerifyStatus (0.26s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-470824 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-470824 --output=json --layout=cluster: exit status 2 (256.293519ms)

-- stdout --
	{"Name":"pause-470824","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-470824","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.26s)
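The cluster-layout JSON above encodes component states as HTTP-style codes (200 OK, 405 Stopped, 418 Paused). A sketch for pulling those fields out by hand; piping through jq is an assumption of this note, not something the test does:

	# Overall status plus the apiserver state of the first node; expect "Paused" for both
	out/minikube-linux-amd64 status -p pause-470824 --output=json --layout=cluster \
	  | jq '{cluster: .StatusName, apiserver: .Nodes[0].Components.apiserver.StatusName}'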

TestPause/serial/Unpause (0.57s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-470824 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.57s)

TestPause/serial/PauseAgain (0.74s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-470824 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.74s)

TestPause/serial/DeletePaused (0.83s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-470824 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.83s)

TestPause/serial/VerifyDeletedResources (0.25s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.25s)
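Taken together, the pause group walks a single profile through its full lifecycle; condensed from the commands above (verbosity flags omitted):

	out/minikube-linux-amd64 pause -p pause-470824
	out/minikube-linux-amd64 status -p pause-470824 --output=json --layout=cluster   # exits 2 while paused
	out/minikube-linux-amd64 unpause -p pause-470824
	out/minikube-linux-amd64 pause -p pause-470824
	out/minikube-linux-amd64 delete -p pause-470824
	out/minikube-linux-amd64 profile list --output json   # confirm the profile is gone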

TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-619576 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-619576 "sudo systemctl is-active --quiet service kubelet": exit status 1 (296.666954ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)
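The non-zero ssh exit above is the passing outcome: systemctl is-active returns non-zero when the unit is not running, which is exactly what a --no-kubernetes profile should show. By hand:

	out/minikube-linux-amd64 ssh -p NoKubernetes-619576 "sudo systemctl is-active --quiet service kubelet"
	echo "kubelet check exit code: $?"   # non-zero means kubelet is not running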

TestNoKubernetes/serial/ProfileList (14.52s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (13.897693382s)
--- PASS: TestNoKubernetes/serial/ProfileList (14.52s)

TestNoKubernetes/serial/Stop (2.21s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-619576
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-619576: (2.207698624s)
--- PASS: TestNoKubernetes/serial/Stop (2.21s)

TestNoKubernetes/serial/StartNoArgs (37.83s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-619576 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-619576 --driver=kvm2 : (37.830543312s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (37.83s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.52s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-619576 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-619576 "sudo systemctl is-active --quiet service kubelet": exit status 1 (523.848232ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.52s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.4s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-155734
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-155734: (1.400546498s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.40s)

TestNetworkPlugins/group/auto/Start (75.67s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-557985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-557985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m15.668702758s)
--- PASS: TestNetworkPlugins/group/auto/Start (75.67s)

TestNetworkPlugins/group/kindnet/Start (93.62s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-557985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
E0108 21:33:54.406473  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/functional-733963/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-557985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m33.621037971s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (93.62s)

TestNetworkPlugins/group/auto/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-557985 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

TestNetworkPlugins/group/auto/NetCatPod (12.34s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-557985 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-r8kn9" [66858c7f-450e-459c-b658-163229f109db] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-r8kn9" [66858c7f-450e-459c-b658-163229f109db] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004607298s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.34s)

TestNetworkPlugins/group/auto/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-557985 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

TestNetworkPlugins/group/auto/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-557985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-557985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
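Each network-plugins group repeats the same connectivity probes against the netcat deployment; condensed from the auto group above:

	# DNS resolution from inside a pod
	kubectl --context auto-557985 exec deployment/netcat -- nslookup kubernetes.default
	# Pod-local loopback connectivity
	kubectl --context auto-557985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# Hairpin: the pod reaching itself through its own service name
	kubectl --context auto-557985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"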

TestNetworkPlugins/group/calico/Start (109.11s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-557985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-557985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m49.106910297s)
--- PASS: TestNetworkPlugins/group/calico/Start (109.11s)

TestNetworkPlugins/group/custom-flannel/Start (92.93s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-557985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-557985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m32.932175609s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (92.93s)
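Note that --cni accepts a path to a manifest as well as a built-in plugin name, which is what this group exercises; condensed from the run above:

	# Custom CNI from a local manifest instead of a named plugin
	out/minikube-linux-amd64 start -p custom-flannel-557985 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=kvm2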

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-h7mff" [5263e035-5f66-41f4-8614-b91229b0d42b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.009527713s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-557985 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

TestNetworkPlugins/group/kindnet/NetCatPod (13.28s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-557985 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zksl7" [c47fc4b2-b39c-4e14-87dc-0f3289db5b93] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zksl7" [c47fc4b2-b39c-4e14-87dc-0f3289db5b93] Running
E0108 21:35:18.535247  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/skaffold-274875/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.007504793s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.28s)

TestNetworkPlugins/group/kindnet/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-557985 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-557985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-557985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/false/Start (73.96s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-557985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-557985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m13.96006134s)
--- PASS: TestNetworkPlugins/group/false/Start (73.96s)

TestNetworkPlugins/group/enable-default-cni/Start (102.98s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-557985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E0108 21:35:46.219028  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/skaffold-274875/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-557985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m42.979105455s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (102.98s)

TestNetworkPlugins/group/calico/ControllerPod (6.02s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-ftmk5" [ece35807-5811-4a25-b14f-e8c2b130530f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.013545246s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.02s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-557985 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.42s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-557985 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9dhj2" [fba40719-303f-466f-b096-2b43e49c8f77] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0108 21:36:09.412530  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/gvisor-282905/client.crt: no such file or directory
E0108 21:36:09.417859  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/gvisor-282905/client.crt: no such file or directory
E0108 21:36:09.428193  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/gvisor-282905/client.crt: no such file or directory
E0108 21:36:09.448647  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/gvisor-282905/client.crt: no such file or directory
E0108 21:36:09.489024  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/gvisor-282905/client.crt: no such file or directory
E0108 21:36:09.570053  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/gvisor-282905/client.crt: no such file or directory
E0108 21:36:09.730836  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/gvisor-282905/client.crt: no such file or directory
E0108 21:36:10.051590  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/gvisor-282905/client.crt: no such file or directory
E0108 21:36:10.692596  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/gvisor-282905/client.crt: no such file or directory
E0108 21:36:11.973541  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/gvisor-282905/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-9dhj2" [fba40719-303f-466f-b096-2b43e49c8f77] Running
E0108 21:36:19.654400  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/gvisor-282905/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.00571926s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.42s)

TestNetworkPlugins/group/calico/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-557985 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (15.58s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-557985 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-q77vs" [c6488525-a5b9-4ad2-a75f-d784bed15fca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0108 21:36:14.534063  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/gvisor-282905/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-q77vs" [c6488525-a5b9-4ad2-a75f-d784bed15fca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 15.006036957s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (15.58s)

TestNetworkPlugins/group/custom-flannel/DNS (0.25s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-557985 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-557985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-557985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

TestNetworkPlugins/group/calico/DNS (0.26s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-557985 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

TestNetworkPlugins/group/calico/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-557985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0108 21:36:29.895284  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/gvisor-282905/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

TestNetworkPlugins/group/calico/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-557985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

TestNetworkPlugins/group/flannel/Start (88.03s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-557985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-557985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m28.026040023s)
--- PASS: TestNetworkPlugins/group/flannel/Start (88.03s)

TestNetworkPlugins/group/bridge/Start (95.96s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-557985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-557985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m35.960384597s)
--- PASS: TestNetworkPlugins/group/bridge/Start (95.96s)

TestNetworkPlugins/group/false/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-557985 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.24s)

TestNetworkPlugins/group/false/NetCatPod (11.27s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-557985 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gdv6h" [56afb9df-aa26-4ba2-bf24-4cc429265d00] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0108 21:36:57.456403  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/functional-733963/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-gdv6h" [56afb9df-aa26-4ba2-bf24-4cc429265d00] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.004549869s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.27s)

TestNetworkPlugins/group/false/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-557985 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.18s)

TestNetworkPlugins/group/false/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-557985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.16s)

TestNetworkPlugins/group/false/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-557985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.17s)

TestNetworkPlugins/group/kubenet/Start (87.15s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-557985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-557985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m27.153960484s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (87.15s)
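Unlike the CNI groups, kubenet is selected through the kubelet's network plugin rather than --cni; condensed from the run above:

	out/minikube-linux-amd64 start -p kubenet-557985 --memory=3072 --network-plugin=kubenet --driver=kvm2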

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-557985 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.3s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-557985 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qq82z" [c6d91e53-6d8b-49c5-a190-5b195cd699bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0108 21:37:31.336711  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/gvisor-282905/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-qq82z" [c6d91e53-6d8b-49c5-a190-5b195cd699bd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.012907528s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.30s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-557985 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-557985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-557985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

TestStartStop/group/old-k8s-version/serial/FirstStart (161.95s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-302420 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-302420 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (2m41.952540577s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (161.95s)

TestNetworkPlugins/group/flannel/ControllerPod (5.11s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-nzk8t" [141c7a9b-efeb-49ff-bc8e-b4d56b6b8a40] Running
E0108 21:38:13.177888  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/addons-188169/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.104228661s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.11s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.52s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-557985 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.52s)

TestNetworkPlugins/group/flannel/NetCatPod (14.94s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-557985 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9w6vw" [73330c7a-bcdd-421c-80a8-ddc8f43e9626] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9w6vw" [73330c7a-bcdd-421c-80a8-ddc8f43e9626] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.004751988s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.94s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-557985 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

TestNetworkPlugins/group/bridge/NetCatPod (12.26s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-557985 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dftj8" [d6c49c5e-23e0-48f0-aba2-17af34347f04] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dftj8" [d6c49c5e-23e0-48f0-aba2-17af34347f04] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.007041622s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.26s)

TestNetworkPlugins/group/flannel/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-557985 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-557985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-557985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

TestNetworkPlugins/group/bridge/DNS (0.25s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-557985 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.25s)

TestNetworkPlugins/group/bridge/Localhost (0.22s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-557985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.22s)

TestNetworkPlugins/group/bridge/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-557985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

TestStartStop/group/no-preload/serial/FirstStart (94.82s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-129753 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.29.0-rc.2
E0108 21:38:53.257837  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/gvisor-282905/client.crt: no such file or directory
E0108 21:38:54.406882  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/functional-733963/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-129753 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.29.0-rc.2: (1m34.81751452s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (94.82s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-557985 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.27s)

TestNetworkPlugins/group/kubenet/NetCatPod (13.33s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-557985 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-q97lg" [79d67c99-1395-451c-9439-d444e494a825] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-q97lg" [79d67c99-1395-451c-9439-d444e494a825] Running
E0108 21:39:05.502487  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/auto-557985/client.crt: no such file or directory
E0108 21:39:05.507978  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/auto-557985/client.crt: no such file or directory
E0108 21:39:05.518276  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/auto-557985/client.crt: no such file or directory
E0108 21:39:05.538596  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/auto-557985/client.crt: no such file or directory
E0108 21:39:05.578896  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/auto-557985/client.crt: no such file or directory
E0108 21:39:05.659252  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/auto-557985/client.crt: no such file or directory
E0108 21:39:05.820050  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/auto-557985/client.crt: no such file or directory
E0108 21:39:06.140664  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/auto-557985/client.crt: no such file or directory
E0108 21:39:06.780843  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/auto-557985/client.crt: no such file or directory
E0108 21:39:08.061685  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/auto-557985/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.004570222s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.33s)

TestStartStop/group/embed-certs/serial/FirstStart (131.79s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-248161 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-248161 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.4: (2m11.789392479s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (131.79s)

TestNetworkPlugins/group/kubenet/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-557985 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.18s)

TestNetworkPlugins/group/kubenet/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-557985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.15s)

TestNetworkPlugins/group/kubenet/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-557985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.16s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (101.93s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-841296 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.4
E0108 21:39:46.465164  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/auto-557985/client.crt: no such file or directory
E0108 21:40:02.652093  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kindnet-557985/client.crt: no such file or directory
E0108 21:40:02.657471  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kindnet-557985/client.crt: no such file or directory
E0108 21:40:02.667832  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kindnet-557985/client.crt: no such file or directory
E0108 21:40:02.688203  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kindnet-557985/client.crt: no such file or directory
E0108 21:40:02.728651  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kindnet-557985/client.crt: no such file or directory
E0108 21:40:02.809008  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kindnet-557985/client.crt: no such file or directory
E0108 21:40:02.970063  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kindnet-557985/client.crt: no such file or directory
E0108 21:40:03.290571  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kindnet-557985/client.crt: no such file or directory
E0108 21:40:03.931235  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kindnet-557985/client.crt: no such file or directory
E0108 21:40:05.212146  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kindnet-557985/client.crt: no such file or directory
E0108 21:40:07.773017  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kindnet-557985/client.crt: no such file or directory
E0108 21:40:12.893590  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kindnet-557985/client.crt: no such file or directory
E0108 21:40:18.534870  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/skaffold-274875/client.crt: no such file or directory
E0108 21:40:23.134487  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kindnet-557985/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-841296 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.4: (1m41.926037777s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (101.93s)

TestStartStop/group/no-preload/serial/DeployApp (10.46s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-129753 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d6d438ba-4c3f-4a09-8f65-1baae77d1507] Pending
helpers_test.go:344: "busybox" [d6d438ba-4c3f-4a09-8f65-1baae77d1507] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0108 21:40:27.425862  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/auto-557985/client.crt: no such file or directory
helpers_test.go:344: "busybox" [d6d438ba-4c3f-4a09-8f65-1baae77d1507] Running
E0108 21:40:28.707078  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/ingress-addon-legacy-183510/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004223286s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-129753 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.46s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.10s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-129753 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-129753 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.014948473s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-129753 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/no-preload/serial/Stop (13.15s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-129753 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-129753 --alsologtostderr -v=3: (13.149690402s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.15s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.44s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-302420 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4635c3cb-8f51-4fee-8571-a5a21f40b070] Pending
helpers_test.go:344: "busybox" [4635c3cb-8f51-4fee-8571-a5a21f40b070] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0108 21:40:43.615070  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kindnet-557985/client.crt: no such file or directory
helpers_test.go:344: "busybox" [4635c3cb-8f51-4fee-8571-a5a21f40b070] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004319304s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-302420 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.44s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-129753 -n no-preload-129753
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-129753 -n no-preload-129753: exit status 7 (87.95331ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-129753 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (336.33s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-129753 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-129753 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.29.0-rc.2: (5m35.966834815s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-129753 -n no-preload-129753
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (336.33s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.96s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-302420 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-302420 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.96s)

TestStartStop/group/old-k8s-version/serial/Stop (13.16s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-302420 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-302420 --alsologtostderr -v=3: (13.156927812s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.16s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-302420 -n old-k8s-version-302420
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-302420 -n old-k8s-version-302420: exit status 7 (87.757422ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-302420 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/old-k8s-version/serial/SecondStart (459.48s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-302420 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
E0108 21:41:07.583321  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/calico-557985/client.crt: no such file or directory
E0108 21:41:07.588589  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/calico-557985/client.crt: no such file or directory
E0108 21:41:07.598869  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/calico-557985/client.crt: no such file or directory
E0108 21:41:07.619218  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/calico-557985/client.crt: no such file or directory
E0108 21:41:07.659395  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/calico-557985/client.crt: no such file or directory
E0108 21:41:07.739762  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/calico-557985/client.crt: no such file or directory
E0108 21:41:07.900196  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/calico-557985/client.crt: no such file or directory
E0108 21:41:08.220818  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/calico-557985/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-302420 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (7m39.182916288s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-302420 -n old-k8s-version-302420
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (459.48s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.32s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-841296 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [85b0336b-bebd-4fff-86d3-cd940535546c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0108 21:41:08.861554  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/calico-557985/client.crt: no such file or directory
E0108 21:41:09.003125  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/custom-flannel-557985/client.crt: no such file or directory
E0108 21:41:09.008435  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/custom-flannel-557985/client.crt: no such file or directory
E0108 21:41:09.018709  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/custom-flannel-557985/client.crt: no such file or directory
E0108 21:41:09.039037  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/custom-flannel-557985/client.crt: no such file or directory
E0108 21:41:09.079377  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/custom-flannel-557985/client.crt: no such file or directory
E0108 21:41:09.159736  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/custom-flannel-557985/client.crt: no such file or directory
E0108 21:41:09.320042  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/custom-flannel-557985/client.crt: no such file or directory
E0108 21:41:09.411979  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/gvisor-282905/client.crt: no such file or directory
E0108 21:41:09.640495  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/custom-flannel-557985/client.crt: no such file or directory
E0108 21:41:10.142611  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/calico-557985/client.crt: no such file or directory
E0108 21:41:10.280954  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/custom-flannel-557985/client.crt: no such file or directory
E0108 21:41:11.561565  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/custom-flannel-557985/client.crt: no such file or directory
helpers_test.go:344: "busybox" [85b0336b-bebd-4fff-86d3-cd940535546c] Running
E0108 21:41:12.703500  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/calico-557985/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004031173s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-841296 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.32s)

TestStartStop/group/embed-certs/serial/DeployApp (9.37s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-248161 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cf4cf149-97ba-4818-9cfd-eced9b0b7326] Pending
helpers_test.go:344: "busybox" [cf4cf149-97ba-4818-9cfd-eced9b0b7326] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0108 21:41:14.122111  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/custom-flannel-557985/client.crt: no such file or directory
helpers_test.go:344: "busybox" [cf4cf149-97ba-4818-9cfd-eced9b0b7326] Running
E0108 21:41:17.824242  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/calico-557985/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004333339s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-248161 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-841296 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0108 21:41:19.243225  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/custom-flannel-557985/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-841296 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.075379794s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-841296 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.16s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-841296 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-841296 --alsologtostderr -v=3: (13.160680301s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.16s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.25s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-248161 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-248161 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.162852452s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-248161 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/embed-certs/serial/Stop (13.17s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-248161 --alsologtostderr -v=3
E0108 21:41:24.575603  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kindnet-557985/client.crt: no such file or directory
E0108 21:41:28.065165  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/calico-557985/client.crt: no such file or directory
E0108 21:41:29.484182  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/custom-flannel-557985/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-248161 --alsologtostderr -v=3: (13.172276877s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.17s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-841296 -n default-k8s-diff-port-841296
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-841296 -n default-k8s-diff-port-841296: exit status 7 (87.76784ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-841296 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (331.99s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-841296 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-841296 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.4: (5m31.660344394s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-841296 -n default-k8s-diff-port-841296
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (331.99s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.29s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-248161 -n embed-certs-248161
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-248161 -n embed-certs-248161: exit status 7 (119.51799ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-248161 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/embed-certs/serial/SecondStart (326.59s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-248161 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.4
E0108 21:41:37.098893  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/gvisor-282905/client.crt: no such file or directory
E0108 21:41:48.546077  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/calico-557985/client.crt: no such file or directory
E0108 21:41:49.347065  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/auto-557985/client.crt: no such file or directory
E0108 21:41:49.965035  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/custom-flannel-557985/client.crt: no such file or directory
E0108 21:41:57.148255  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/false-557985/client.crt: no such file or directory
E0108 21:41:57.153587  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/false-557985/client.crt: no such file or directory
E0108 21:41:57.163905  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/false-557985/client.crt: no such file or directory
E0108 21:41:57.184199  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/false-557985/client.crt: no such file or directory
E0108 21:41:57.224533  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/false-557985/client.crt: no such file or directory
E0108 21:41:57.305297  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/false-557985/client.crt: no such file or directory
E0108 21:41:57.465666  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/false-557985/client.crt: no such file or directory
E0108 21:41:57.786513  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/false-557985/client.crt: no such file or directory
E0108 21:41:58.426911  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/false-557985/client.crt: no such file or directory
E0108 21:41:59.708067  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/false-557985/client.crt: no such file or directory
E0108 21:42:02.268579  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/false-557985/client.crt: no such file or directory
E0108 21:42:07.389138  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/false-557985/client.crt: no such file or directory
E0108 21:42:17.629556  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/false-557985/client.crt: no such file or directory
E0108 21:42:25.661277  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/ingress-addon-legacy-183510/client.crt: no such file or directory
E0108 21:42:28.669077  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/enable-default-cni-557985/client.crt: no such file or directory
E0108 21:42:28.674383  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/enable-default-cni-557985/client.crt: no such file or directory
E0108 21:42:28.684720  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/enable-default-cni-557985/client.crt: no such file or directory
E0108 21:42:28.705013  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/enable-default-cni-557985/client.crt: no such file or directory
E0108 21:42:28.745342  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/enable-default-cni-557985/client.crt: no such file or directory
E0108 21:42:28.825737  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/enable-default-cni-557985/client.crt: no such file or directory
E0108 21:42:28.986185  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/enable-default-cni-557985/client.crt: no such file or directory
E0108 21:42:29.306738  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/enable-default-cni-557985/client.crt: no such file or directory
E0108 21:42:29.507234  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/calico-557985/client.crt: no such file or directory
E0108 21:42:29.947375  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/enable-default-cni-557985/client.crt: no such file or directory
E0108 21:42:30.925243  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/custom-flannel-557985/client.crt: no such file or directory
E0108 21:42:31.228317  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/enable-default-cni-557985/client.crt: no such file or directory
E0108 21:42:33.789031  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/enable-default-cni-557985/client.crt: no such file or directory
E0108 21:42:38.110320  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/false-557985/client.crt: no such file or directory
E0108 21:42:38.909803  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/enable-default-cni-557985/client.crt: no such file or directory
E0108 21:42:46.495793  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kindnet-557985/client.crt: no such file or directory
E0108 21:42:49.150625  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/enable-default-cni-557985/client.crt: no such file or directory
E0108 21:43:08.252161  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/flannel-557985/client.crt: no such file or directory
E0108 21:43:08.257503  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/flannel-557985/client.crt: no such file or directory
E0108 21:43:08.267799  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/flannel-557985/client.crt: no such file or directory
E0108 21:43:08.288144  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/flannel-557985/client.crt: no such file or directory
E0108 21:43:08.328469  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/flannel-557985/client.crt: no such file or directory
E0108 21:43:08.408797  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/flannel-557985/client.crt: no such file or directory
E0108 21:43:08.569452  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/flannel-557985/client.crt: no such file or directory
E0108 21:43:08.890228  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/flannel-557985/client.crt: no such file or directory
E0108 21:43:09.530931  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/flannel-557985/client.crt: no such file or directory
E0108 21:43:09.631190  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/enable-default-cni-557985/client.crt: no such file or directory
E0108 21:43:10.811611  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/flannel-557985/client.crt: no such file or directory
E0108 21:43:13.178608  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/addons-188169/client.crt: no such file or directory
E0108 21:43:13.371813  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/flannel-557985/client.crt: no such file or directory
E0108 21:43:18.491964  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/flannel-557985/client.crt: no such file or directory
E0108 21:43:19.071282  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/false-557985/client.crt: no such file or directory
E0108 21:43:27.485946  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/bridge-557985/client.crt: no such file or directory
E0108 21:43:27.491254  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/bridge-557985/client.crt: no such file or directory
E0108 21:43:27.501609  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/bridge-557985/client.crt: no such file or directory
E0108 21:43:27.521969  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/bridge-557985/client.crt: no such file or directory
E0108 21:43:27.563105  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/bridge-557985/client.crt: no such file or directory
E0108 21:43:27.643536  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/bridge-557985/client.crt: no such file or directory
E0108 21:43:27.803855  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/bridge-557985/client.crt: no such file or directory
E0108 21:43:28.124515  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/bridge-557985/client.crt: no such file or directory
E0108 21:43:28.732626  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/flannel-557985/client.crt: no such file or directory
E0108 21:43:28.764802  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/bridge-557985/client.crt: no such file or directory
E0108 21:43:30.045367  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/bridge-557985/client.crt: no such file or directory
E0108 21:43:32.606566  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/bridge-557985/client.crt: no such file or directory
E0108 21:43:37.727473  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/bridge-557985/client.crt: no such file or directory
E0108 21:43:47.967705  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/bridge-557985/client.crt: no such file or directory
E0108 21:43:49.212866  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/flannel-557985/client.crt: no such file or directory
E0108 21:43:50.592152  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/enable-default-cni-557985/client.crt: no such file or directory
E0108 21:43:51.427636  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/calico-557985/client.crt: no such file or directory
E0108 21:43:52.846490  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/custom-flannel-557985/client.crt: no such file or directory
E0108 21:43:54.406471  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/functional-733963/client.crt: no such file or directory
E0108 21:43:55.742319  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kubenet-557985/client.crt: no such file or directory
E0108 21:43:55.747670  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kubenet-557985/client.crt: no such file or directory
E0108 21:43:55.758029  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kubenet-557985/client.crt: no such file or directory
E0108 21:43:55.778368  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kubenet-557985/client.crt: no such file or directory
E0108 21:43:55.818694  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kubenet-557985/client.crt: no such file or directory
E0108 21:43:55.899056  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kubenet-557985/client.crt: no such file or directory
E0108 21:43:56.059455  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kubenet-557985/client.crt: no such file or directory
E0108 21:43:56.380063  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kubenet-557985/client.crt: no such file or directory
E0108 21:43:57.021172  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kubenet-557985/client.crt: no such file or directory
E0108 21:43:58.301779  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kubenet-557985/client.crt: no such file or directory
E0108 21:44:00.862669  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kubenet-557985/client.crt: no such file or directory
E0108 21:44:05.501771  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/auto-557985/client.crt: no such file or directory
E0108 21:44:05.983272  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kubenet-557985/client.crt: no such file or directory
E0108 21:44:08.448081  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/bridge-557985/client.crt: no such file or directory
E0108 21:44:16.223408  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kubenet-557985/client.crt: no such file or directory
E0108 21:44:30.174039  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/flannel-557985/client.crt: no such file or directory
E0108 21:44:33.187818  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/auto-557985/client.crt: no such file or directory
E0108 21:44:36.703959  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kubenet-557985/client.crt: no such file or directory
E0108 21:44:40.991807  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/false-557985/client.crt: no such file or directory
E0108 21:44:49.408269  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/bridge-557985/client.crt: no such file or directory
E0108 21:45:02.651592  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kindnet-557985/client.crt: no such file or directory
E0108 21:45:12.513286  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/enable-default-cni-557985/client.crt: no such file or directory
E0108 21:45:17.664899  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kubenet-557985/client.crt: no such file or directory
E0108 21:45:18.534627  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/skaffold-274875/client.crt: no such file or directory
E0108 21:45:30.336029  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kindnet-557985/client.crt: no such file or directory
E0108 21:45:52.095245  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/flannel-557985/client.crt: no such file or directory
E0108 21:46:07.583206  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/calico-557985/client.crt: no such file or directory
E0108 21:46:09.002544  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/custom-flannel-557985/client.crt: no such file or directory
E0108 21:46:09.411491  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/gvisor-282905/client.crt: no such file or directory
E0108 21:46:11.329227  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/bridge-557985/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-248161 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.4: (5m26.286718988s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-248161 -n embed-certs-248161
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (326.59s)
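
The E0108 "cert_rotation.go:168" lines threaded through this log are noise from the long-running test process (pid 149988), not part of any single test: client-go's certificate-rotation watcher still holds the client-cert paths of profiles that earlier tests tore down (kubenet-557985, auto-557985, and so on), so each periodic reload fails with "no such file or directory". A minimal Go sketch of that failure mode, using a hypothetical path in place of the Jenkins one:

	package main

	import (
		"crypto/tls"
		"errors"
		"fmt"
		"io/fs"
	)

	func main() {
		// Reloading a keypair whose files were deleted fails exactly like the
		// cert_rotation lines above; these paths are illustrative only.
		certPath := "/tmp/deleted-profile/client.crt"
		keyPath := "/tmp/deleted-profile/client.key"
		if _, err := tls.LoadX509KeyPair(certPath, keyPath); errors.Is(err, fs.ErrNotExist) {
			fmt.Printf("key failed with : %v\n", err)
		}
	}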

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (20.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2nk8w" [46d3ca8c-3e8d-44eb-9f04-62df9cbd90e8] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0108 21:46:35.268398  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/calico-557985/client.crt: no such file or directory
E0108 21:46:36.687333  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/custom-flannel-557985/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2nk8w" [46d3ca8c-3e8d-44eb-9f04-62df9cbd90e8] Running
E0108 21:46:39.585935  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kubenet-557985/client.crt: no such file or directory
E0108 21:46:41.579199  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/skaffold-274875/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 20.005693267s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (20.01s)
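
The UserAppExistsAfterStop assertions in this group all follow the same pattern: after the restart, poll until a pod labelled k8s-app=kubernetes-dashboard reports Running, giving up after 9m0s; here it became healthy in about 20s. A rough client-go sketch of that wait loop, assuming a kubeconfig at the default location (this is not minikube's actual helper):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// Poll every 2s, up to 9m, until a dashboard pod reports Running.
		err = wait.PollImmediate(2*time.Second, 9*time.Minute, func() (bool, error) {
			pods, lerr := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
			if lerr != nil {
				return false, nil // treat API hiccups as transient and keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
		fmt.Println("healthy:", err == nil)
	}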

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2nk8w" [46d3ca8c-3e8d-44eb-9f04-62df9cbd90e8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009230305s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-129753 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-129753 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)
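
VerifyKubernetesImages asks the profile for its loaded images and reports anything outside the stock Kubernetes image set; the busybox test image and the gvisor addon are expected leftovers from earlier tests in this run. A standalone sketch of the same check, assuming minikube is on PATH and that plain `image list` output is one image reference per line (both are assumptions, not guarantees of the CLI):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("minikube", "-p", "no-preload-129753", "image", "list").Output()
		if err != nil {
			panic(err)
		}
		for _, img := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			// Flag anything that is not a stock registry.k8s.io image.
			if !strings.HasPrefix(img, "registry.k8s.io/") {
				fmt.Println("Found non-minikube image:", img)
			}
		}
	}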

TestStartStop/group/no-preload/serial/Pause (2.87s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-129753 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-129753 -n no-preload-129753
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-129753 -n no-preload-129753: exit status 2 (281.943303ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-129753 -n no-preload-129753
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-129753 -n no-preload-129753: exit status 2 (290.593576ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-129753 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-129753 -n no-preload-129753
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-129753 -n no-preload-129753
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.87s)
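
The Pause subtest drives a fixed handshake: pause the profile, confirm `status` shows the API server Paused and the kubelet Stopped (both surfacing as exit status 2, which the test explicitly tolerates), then unpause and re-run both status checks. A hedged sketch of reading that exit code from Go, assuming minikube is on PATH; the profile name is taken from this log:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func apiServerStatus(profile string) (string, int) {
		cmd := exec.Command("minikube", "status", "--format={{.APIServer}}", "-p", profile)
		out, err := cmd.Output()
		code := 0
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			code = ee.ExitCode() // 2 while components are paused/stopped, per the log above
		}
		return string(out), code
	}

	func main() {
		status, code := apiServerStatus("no-preload-129753")
		fmt.Printf("status=%q exit=%d (may be ok)\n", status, code)
	}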

TestStartStop/group/newest-cni/serial/FirstStart (70.65s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-650530 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.29.0-rc.2
E0108 21:46:57.148242  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/false-557985/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-650530 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.29.0-rc.2: (1m10.6519788s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (70.65s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jkzdf" [0ca125ab-015f-43e0-b021-d3d5c3871c23] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005496079s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pl9ml" [9e4a8731-4421-4ba6-a32c-de8c5fb00924] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004954031s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jkzdf" [0ca125ab-015f-43e0-b021-d3d5c3871c23] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006867559s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-248161 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pl9ml" [9e4a8731-4421-4ba6-a32c-de8c5fb00924] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005112841s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-841296 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-248161 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.81s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-248161 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-248161 -n embed-certs-248161
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-248161 -n embed-certs-248161: exit status 2 (288.72505ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-248161 -n embed-certs-248161
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-248161 -n embed-certs-248161: exit status 2 (292.843324ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-248161 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-248161 -n embed-certs-248161
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-248161 -n embed-certs-248161
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.81s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-841296 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-841296 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-841296 -n default-k8s-diff-port-841296
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-841296 -n default-k8s-diff-port-841296: exit status 2 (284.961089ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-841296 -n default-k8s-diff-port-841296
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-841296 -n default-k8s-diff-port-841296: exit status 2 (343.87229ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-841296 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-841296 -n default-k8s-diff-port-841296
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-841296 -n default-k8s-diff-port-841296
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.46s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-650530 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.88s)

TestStartStop/group/newest-cni/serial/Stop (13.13s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-650530 --alsologtostderr -v=3
E0108 21:48:08.252436  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/flannel-557985/client.crt: no such file or directory
E0108 21:48:13.178189  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/addons-188169/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-650530 --alsologtostderr -v=3: (13.130220599s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.13s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-650530 -n newest-cni-650530
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-650530 -n newest-cni-650530: exit status 7 (80.591889ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-650530 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (46.45s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-650530 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.29.0-rc.2
E0108 21:48:27.486021  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/bridge-557985/client.crt: no such file or directory
E0108 21:48:35.935844  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/flannel-557985/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-650530 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.29.0-rc.2: (46.063382124s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-650530 -n newest-cni-650530
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (46.45s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-kwx5z" [b9c7a831-b320-4725-a386-32f53ff76bb1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004695543s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-kwx5z" [b9c7a831-b320-4725-a386-32f53ff76bb1] Running
E0108 21:48:54.406221  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/functional-733963/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005002909s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-302420 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-302420 image list --format=json
E0108 21:48:55.169802  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/bridge-557985/client.crt: no such file or directory
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (2.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-302420 --alsologtostderr -v=1
E0108 21:48:55.741953  149988 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17866-142784/.minikube/profiles/kubenet-557985/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-302420 -n old-k8s-version-302420
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-302420 -n old-k8s-version-302420: exit status 2 (283.646085ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-302420 -n old-k8s-version-302420
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-302420 -n old-k8s-version-302420: exit status 2 (251.625469ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-302420 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-302420 -n old-k8s-version-302420
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-302420 -n old-k8s-version-302420
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.58s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-650530 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/Pause (2.36s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-650530 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-650530 -n newest-cni-650530
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-650530 -n newest-cni-650530: exit status 2 (246.572559ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-650530 -n newest-cni-650530
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-650530 -n newest-cni-650530: exit status 2 (253.323692ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-650530 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-650530 -n newest-cni-650530
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-650530 -n newest-cni-650530
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.36s)

Test skip (34/329)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.4/cached-images 0
13 TestDownloadOnly/v1.28.4/binaries 0
14 TestDownloadOnly/v1.28.4/kubectl 0
19 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
20 TestDownloadOnly/v1.29.0-rc.2/binaries 0
21 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
25 TestDownloadOnlyKic 0
39 TestAddons/parallel/Olm 0
55 TestDockerEnvContainerd 0
57 TestHyperKitDriverInstallOrUpdate 0
58 TestHyperkitDriverSkipUpgrade 0
110 TestFunctional/parallel/PodmanEnv 0
118 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
119 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
120 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
124 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
167 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
200 TestKicCustomNetwork 0
201 TestKicExistingNetwork 0
202 TestKicCustomSubnet 0
203 TestKicStaticIP 0
235 TestChangeNoneUser 0
238 TestScheduledStopWindows 0
242 TestInsufficientStorage 0
246 TestMissingContainerUpgrade 0
257 TestNetworkPlugins/group/cilium 4.56
263 TestStartStop/group/disable-driver-mounts 0.21

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)
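
All nine TestDownloadOnly skips in this group come down to the same guard: when a preloaded image tarball for the Kubernetes version under test is already on disk, the download-only assertions have nothing to verify. An illustrative version of that guard (the path and test name here are hypothetical, not minikube's actual code):

	package example

	import (
		"os"
		"testing"
	)

	func TestCachedImages(t *testing.T) {
		// Skip when the preload tarball already exists; otherwise the test
		// would go on to verify each image landed in the cache directory.
		preload := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4")
		if _, err := os.Stat(preload); err == nil {
			t.Skip("Preload exists, images won't be cached")
		}
	}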

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
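
Every TunnelCmd skip below has the same root cause: `minikube tunnel` must add host routes, which requires root or passwordless sudo, and the Jenkins agent has neither. A sketch of that kind of privilege guard (not minikube's actual check):

	package example

	import (
		"os"
		"os/exec"
		"testing"
	)

	func TestTunnelPrereq(t *testing.T) {
		if os.Geteuid() == 0 {
			return // already root; route changes would succeed
		}
		// `sudo -n` fails fast instead of prompting when a password is needed.
		if err := exec.Command("sudo", "-n", "true").Run(); err != nil {
			t.Skipf("password required to execute 'route', skipping: %v", err)
		}
	}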

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (4.56s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-557985 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-557985
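
Every probe in this debugLogs dump fails the same way: the cilium test was skipped before `minikube start` ever ran, so no kubeconfig context named cilium-557985 exists for kubectl to resolve. A small client-go sketch of the lookup that is failing here:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig the same way kubectl does and look for the context.
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			panic(err)
		}
		if _, ok := cfg.Contexts["cilium-557985"]; !ok {
			fmt.Println(`context was not found for specified context: cilium-557985`)
		}
	}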

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-557985

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-557985

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-557985

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-557985

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-557985

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-557985

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-557985

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-557985

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-557985

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-557985

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-557985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-557985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-557985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-557985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-557985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-557985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-557985" does not exist

>>> k8s: api server logs:
error: context "cilium-557985" does not exist

>>> host: /etc/cni:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

>>> host: ip a s:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

>>> host: ip r s:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

>>> host: iptables-save:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

>>> host: iptables table nat:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-557985

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-557985

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-557985" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-557985" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-557985

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-557985

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-557985" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-557985" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-557985" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-557985" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-557985" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

>>> host: kubelet daemon config:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

>>> k8s: kubelet logs:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-557985

>>> host: docker daemon status:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

>>> host: docker daemon config:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

>>> host: docker system info:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

>>> host: cri-docker daemon status:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

>>> host: cri-docker daemon config:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

>>> host: cri-dockerd version:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

>>> host: containerd daemon status:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

>>> host: containerd daemon config:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

>>> host: containerd config dump:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

>>> host: crio daemon status:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

>>> host: crio daemon config:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

>>> host: /etc/crio:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

>>> host: crio config:
* Profile "cilium-557985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-557985"

----------------------- debugLogs end: cilium-557985 [took: 4.386420032s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-557985" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-557985
--- SKIP: TestNetworkPlugins/group/cilium (4.56s)

TestStartStop/group/disable-driver-mounts (0.21s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-654233" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-654233
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)
