Test Report: KVM_Linux 17240

ca8bf15b503bfa796ca02bce755f3a2820b75eb7:2023-09-19:31081

Failed tests (3/317)

Order  Failed test                                                          Duration (s)
213    TestMultiNode/serial/StartAfterStop                                  21.67
225    TestScheduledStopUnix                                                53.28
360    TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages    2.88
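
To triage locally, the failing step can be re-run directly against the same profile with the exact command captured in the log below; the go test invocation is only a sketch and assumes the minikube repository layout (the harness in test/integration defines its own flags, e.g. for locating the built binary):

	# Re-run the exact failing step from the log:
	out/minikube-linux-amd64 -p multinode-415589 node start m03 --alsologtostderr

	# Sketch: re-run the full test through Go's test runner; assumes the
	# harness can find the freshly built binary (see test/integration for its flags).
	go test ./test/integration -v -timeout 30m -run 'TestMultiNode/serial/StartAfterStop'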
TestMultiNode/serial/StartAfterStop (21.67s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-415589 node start m03 --alsologtostderr: exit status 90 (19.068575762s)

-- stdout --
	* Starting worker node multinode-415589-m03 in cluster multinode-415589
	* Restarting existing kvm2 VM for "multinode-415589-m03" ...
	
	

-- /stdout --
** stderr ** 
	I0919 16:56:39.199901   87826 out.go:296] Setting OutFile to fd 1 ...
	I0919 16:56:39.200157   87826 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 16:56:39.200167   87826 out.go:309] Setting ErrFile to fd 2...
	I0919 16:56:39.200172   87826 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 16:56:39.200342   87826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-65689/.minikube/bin
	I0919 16:56:39.200603   87826 mustload.go:65] Loading cluster: multinode-415589
	I0919 16:56:39.200977   87826 config.go:182] Loaded profile config "multinode-415589": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 16:56:39.201345   87826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 16:56:39.201395   87826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:56:39.216195   87826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37605
	I0919 16:56:39.216602   87826 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:56:39.217166   87826 main.go:141] libmachine: Using API Version  1
	I0919 16:56:39.217198   87826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:56:39.217567   87826 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:56:39.217779   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetState
	W0919 16:56:39.219568   87826 host.go:58] "multinode-415589-m03" host status: Stopped
	I0919 16:56:39.221900   87826 out.go:177] * Starting worker node multinode-415589-m03 in cluster multinode-415589
	I0919 16:56:39.223373   87826 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 16:56:39.223648   87826 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17240-65689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I0919 16:56:39.223736   87826 cache.go:57] Caching tarball of preloaded images
	I0919 16:56:39.223837   87826 preload.go:174] Found /home/jenkins/minikube-integration/17240-65689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 16:56:39.223853   87826 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 16:56:39.224110   87826 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/config.json ...
	I0919 16:56:39.224375   87826 start.go:365] acquiring machines lock for multinode-415589-m03: {Name:mk203c3120e1410acfaa868a5fe996910aac1894 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 16:56:39.224441   87826 start.go:369] acquired machines lock for "multinode-415589-m03" in 26.176µs
	I0919 16:56:39.224467   87826 start.go:96] Skipping create...Using existing machine configuration
	I0919 16:56:39.224481   87826 fix.go:54] fixHost starting: m03
	I0919 16:56:39.225116   87826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 16:56:39.225156   87826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:56:39.239936   87826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35483
	I0919 16:56:39.240271   87826 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:56:39.240686   87826 main.go:141] libmachine: Using API Version  1
	I0919 16:56:39.240710   87826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:56:39.241013   87826 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:56:39.241210   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
	I0919 16:56:39.241372   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetState
	I0919 16:56:39.242774   87826 fix.go:102] recreateIfNeeded on multinode-415589-m03: state=Stopped err=<nil>
	I0919 16:56:39.242802   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
	W0919 16:56:39.242979   87826 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 16:56:39.244671   87826 out.go:177] * Restarting existing kvm2 VM for "multinode-415589-m03" ...
	I0919 16:56:39.245824   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .Start
	I0919 16:56:39.246012   87826 main.go:141] libmachine: (multinode-415589-m03) Ensuring networks are active...
	I0919 16:56:39.246675   87826 main.go:141] libmachine: (multinode-415589-m03) Ensuring network default is active
	I0919 16:56:39.247090   87826 main.go:141] libmachine: (multinode-415589-m03) Ensuring network mk-multinode-415589 is active
	I0919 16:56:39.247382   87826 main.go:141] libmachine: (multinode-415589-m03) Getting domain xml...
	I0919 16:56:39.247957   87826 main.go:141] libmachine: (multinode-415589-m03) Creating domain...
	I0919 16:56:40.483150   87826 main.go:141] libmachine: (multinode-415589-m03) Waiting to get IP...
	I0919 16:56:40.484175   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:40.484612   87826 main.go:141] libmachine: (multinode-415589-m03) Found IP for machine: 192.168.50.209
	I0919 16:56:40.484649   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has current primary IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:40.484685   87826 main.go:141] libmachine: (multinode-415589-m03) Reserving static IP address...
	I0919 16:56:40.485247   87826 main.go:141] libmachine: (multinode-415589-m03) Reserved static IP address: 192.168.50.209
	I0919 16:56:40.485289   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "multinode-415589-m03", mac: "52:54:00:7a:de:cd", ip: "192.168.50.209"} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:55:59 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
	I0919 16:56:40.485314   87826 main.go:141] libmachine: (multinode-415589-m03) Waiting for SSH to be available...
	I0919 16:56:40.485346   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | skip adding static IP to network mk-multinode-415589 - found existing host DHCP lease matching {name: "multinode-415589-m03", mac: "52:54:00:7a:de:cd", ip: "192.168.50.209"}
	I0919 16:56:40.485363   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | Getting to WaitForSSH function...
	I0919 16:56:40.487934   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:40.488393   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:55:59 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
	I0919 16:56:40.488436   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:40.488641   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | Using SSH client type: external
	I0919 16:56:40.488682   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m03/id_rsa (-rw-------)
	I0919 16:56:40.488720   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.209 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 16:56:40.488740   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | About to run SSH command:
	I0919 16:56:40.488755   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | exit 0
	I0919 16:56:53.613468   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | SSH cmd err, output: <nil>: 
	I0919 16:56:53.613856   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetConfigRaw
	I0919 16:56:53.614493   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetIP
	I0919 16:56:53.616937   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:53.617401   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
	I0919 16:56:53.617436   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:53.617724   87826 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/config.json ...
	I0919 16:56:53.617906   87826 machine.go:88] provisioning docker machine ...
	I0919 16:56:53.617923   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
	I0919 16:56:53.618135   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetMachineName
	I0919 16:56:53.618306   87826 buildroot.go:166] provisioning hostname "multinode-415589-m03"
	I0919 16:56:53.618322   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetMachineName
	I0919 16:56:53.618468   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
	I0919 16:56:53.620497   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:53.620805   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
	I0919 16:56:53.620859   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:53.620984   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
	I0919 16:56:53.621153   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
	I0919 16:56:53.621331   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
	I0919 16:56:53.621461   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
	I0919 16:56:53.621665   87826 main.go:141] libmachine: Using SSH client type: native
	I0919 16:56:53.622159   87826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.209 22 <nil> <nil>}
	I0919 16:56:53.622182   87826 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-415589-m03 && echo "multinode-415589-m03" | sudo tee /etc/hostname
	I0919 16:56:53.745954   87826 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-415589-m03
	
	I0919 16:56:53.745999   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
	I0919 16:56:53.748693   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:53.749081   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
	I0919 16:56:53.749131   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:53.749287   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
	I0919 16:56:53.749503   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
	I0919 16:56:53.749674   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
	I0919 16:56:53.749823   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
	I0919 16:56:53.749982   87826 main.go:141] libmachine: Using SSH client type: native
	I0919 16:56:53.750294   87826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.209 22 <nil> <nil>}
	I0919 16:56:53.750312   87826 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-415589-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-415589-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-415589-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 16:56:53.871256   87826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 16:56:53.871299   87826 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17240-65689/.minikube CaCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17240-65689/.minikube}
	I0919 16:56:53.871332   87826 buildroot.go:174] setting up certificates
	I0919 16:56:53.871346   87826 provision.go:83] configureAuth start
	I0919 16:56:53.871365   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetMachineName
	I0919 16:56:53.871708   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetIP
	I0919 16:56:53.874009   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:53.874436   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
	I0919 16:56:53.874468   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:53.874575   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
	I0919 16:56:53.876929   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:53.877341   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
	I0919 16:56:53.877370   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:53.877499   87826 provision.go:138] copyHostCerts
	I0919 16:56:53.877561   87826 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem, removing ...
	I0919 16:56:53.877571   87826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem
	I0919 16:56:53.877663   87826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem (1078 bytes)
	I0919 16:56:53.877750   87826 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem, removing ...
	I0919 16:56:53.877758   87826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem
	I0919 16:56:53.877782   87826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem (1123 bytes)
	I0919 16:56:53.877844   87826 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem, removing ...
	I0919 16:56:53.877851   87826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem
	I0919 16:56:53.877871   87826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem (1675 bytes)
	I0919 16:56:53.877923   87826 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem org=jenkins.multinode-415589-m03 san=[192.168.50.209 192.168.50.209 localhost 127.0.0.1 minikube multinode-415589-m03]
	I0919 16:56:53.962274   87826 provision.go:172] copyRemoteCerts
	I0919 16:56:53.962335   87826 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 16:56:53.962360   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
	I0919 16:56:53.965106   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:53.965469   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
	I0919 16:56:53.965508   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:53.965637   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
	I0919 16:56:53.965819   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
	I0919 16:56:53.965980   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
	I0919 16:56:53.966159   87826 sshutil.go:53] new ssh client: &{IP:192.168.50.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m03/id_rsa Username:docker}
	I0919 16:56:54.050135   87826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 16:56:54.072583   87826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 16:56:54.093867   87826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0919 16:56:54.119558   87826 provision.go:86] duration metric: configureAuth took 248.195368ms
	I0919 16:56:54.119582   87826 buildroot.go:189] setting minikube options for container-runtime
	I0919 16:56:54.119795   87826 config.go:182] Loaded profile config "multinode-415589": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 16:56:54.119847   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
	I0919 16:56:54.120138   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
	I0919 16:56:54.122462   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:54.122807   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
	I0919 16:56:54.122857   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:54.122964   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
	I0919 16:56:54.123158   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
	I0919 16:56:54.123316   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
	I0919 16:56:54.123476   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
	I0919 16:56:54.123656   87826 main.go:141] libmachine: Using SSH client type: native
	I0919 16:56:54.123955   87826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.209 22 <nil> <nil>}
	I0919 16:56:54.123968   87826 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 16:56:54.235038   87826 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0919 16:56:54.235061   87826 buildroot.go:70] root file system type: tmpfs
	I0919 16:56:54.235224   87826 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 16:56:54.235258   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
	I0919 16:56:54.237841   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:54.238227   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
	I0919 16:56:54.238265   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:54.238445   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
	I0919 16:56:54.238630   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
	I0919 16:56:54.238821   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
	I0919 16:56:54.238942   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
	I0919 16:56:54.239160   87826 main.go:141] libmachine: Using SSH client type: native
	I0919 16:56:54.239526   87826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.209 22 <nil> <nil>}
	I0919 16:56:54.239608   87826 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 16:56:54.362965   87826 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 16:56:54.363002   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
	I0919 16:56:54.365649   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:54.366013   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
	I0919 16:56:54.366040   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:54.366202   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
	I0919 16:56:54.366423   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
	I0919 16:56:54.366593   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
	I0919 16:56:54.366750   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
	I0919 16:56:54.366961   87826 main.go:141] libmachine: Using SSH client type: native
	I0919 16:56:54.367396   87826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.209 22 <nil> <nil>}
	I0919 16:56:54.367419   87826 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 16:56:55.217276   87826 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0919 16:56:55.217309   87826 machine.go:91] provisioned docker machine in 1.599388316s
	I0919 16:56:55.217324   87826 start.go:300] post-start starting for "multinode-415589-m03" (driver="kvm2")
	I0919 16:56:55.217338   87826 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 16:56:55.217386   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
	I0919 16:56:55.217780   87826 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 16:56:55.217825   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
	I0919 16:56:55.220985   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:55.221442   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
	I0919 16:56:55.221474   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:55.221637   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
	I0919 16:56:55.221837   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
	I0919 16:56:55.222041   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
	I0919 16:56:55.222234   87826 sshutil.go:53] new ssh client: &{IP:192.168.50.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m03/id_rsa Username:docker}
	I0919 16:56:55.308140   87826 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 16:56:55.312207   87826 info.go:137] Remote host: Buildroot 2021.02.12
	I0919 16:56:55.312232   87826 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-65689/.minikube/addons for local assets ...
	I0919 16:56:55.312324   87826 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-65689/.minikube/files for local assets ...
	I0919 16:56:55.312438   87826 filesync.go:149] local asset: /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem -> 733972.pem in /etc/ssl/certs
	I0919 16:56:55.312559   87826 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 16:56:55.321552   87826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem --> /etc/ssl/certs/733972.pem (1708 bytes)
	I0919 16:56:55.343266   87826 start.go:303] post-start completed in 125.926082ms
	I0919 16:56:55.343292   87826 fix.go:56] fixHost completed within 16.118813076s
	I0919 16:56:55.343314   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
	I0919 16:56:55.346010   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:55.346433   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
	I0919 16:56:55.346468   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:55.346642   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
	I0919 16:56:55.346830   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
	I0919 16:56:55.346967   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
	I0919 16:56:55.347087   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
	I0919 16:56:55.347273   87826 main.go:141] libmachine: Using SSH client type: native
	I0919 16:56:55.347748   87826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.209 22 <nil> <nil>}
	I0919 16:56:55.347764   87826 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 16:56:55.458471   87826 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695142615.405890302
	
	I0919 16:56:55.458492   87826 fix.go:206] guest clock: 1695142615.405890302
	I0919 16:56:55.458500   87826 fix.go:219] Guest: 2023-09-19 16:56:55.405890302 +0000 UTC Remote: 2023-09-19 16:56:55.343296526 +0000 UTC m=+16.174472057 (delta=62.593776ms)
	I0919 16:56:55.458536   87826 fix.go:190] guest clock delta is within tolerance: 62.593776ms
	I0919 16:56:55.458541   87826 start.go:83] releasing machines lock for "multinode-415589-m03", held for 16.23408758s
	I0919 16:56:55.458562   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
	I0919 16:56:55.458895   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetIP
	I0919 16:56:55.461888   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:55.462317   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
	I0919 16:56:55.462352   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:55.462489   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
	I0919 16:56:55.463238   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
	I0919 16:56:55.463488   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
	I0919 16:56:55.463594   87826 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 16:56:55.463655   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
	I0919 16:56:55.463780   87826 ssh_runner.go:195] Run: systemctl --version
	I0919 16:56:55.463802   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
	I0919 16:56:55.466416   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:55.466752   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
	I0919 16:56:55.466791   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:55.466913   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:55.466943   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
	I0919 16:56:55.467101   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
	I0919 16:56:55.467219   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
	I0919 16:56:55.467350   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
	I0919 16:56:55.467374   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
	I0919 16:56:55.467386   87826 sshutil.go:53] new ssh client: &{IP:192.168.50.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m03/id_rsa Username:docker}
	I0919 16:56:55.467516   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
	I0919 16:56:55.467651   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
	I0919 16:56:55.467782   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
	I0919 16:56:55.467909   87826 sshutil.go:53] new ssh client: &{IP:192.168.50.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m03/id_rsa Username:docker}
	I0919 16:56:55.552742   87826 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 16:56:55.580877   87826 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 16:56:55.581059   87826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 16:56:55.599969   87826 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 16:56:55.599994   87826 start.go:469] detecting cgroup driver to use...
	I0919 16:56:55.600169   87826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 16:56:55.618705   87826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0919 16:56:55.629933   87826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 16:56:55.641013   87826 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0919 16:56:55.641072   87826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0919 16:56:55.652627   87826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 16:56:55.662867   87826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 16:56:55.672560   87826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 16:56:55.682697   87826 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 16:56:55.693463   87826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 16:56:55.703435   87826 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 16:56:55.711943   87826 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 16:56:55.720311   87826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 16:56:55.826917   87826 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 16:56:55.844596   87826 start.go:469] detecting cgroup driver to use...
	I0919 16:56:55.844704   87826 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 16:56:55.859155   87826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 16:56:55.873010   87826 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 16:56:55.890737   87826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 16:56:55.903270   87826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 16:56:55.915537   87826 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 16:56:55.947328   87826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 16:56:55.960937   87826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 16:56:55.978060   87826 ssh_runner.go:195] Run: which cri-dockerd
	I0919 16:56:55.981872   87826 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 16:56:55.989568   87826 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0919 16:56:56.003670   87826 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 16:56:56.112061   87826 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 16:56:56.232698   87826 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0919 16:56:56.232733   87826 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0919 16:56:56.249459   87826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 16:56:56.356638   87826 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 16:56:57.777045   87826 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.420368194s)
	I0919 16:56:57.777131   87826 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 16:56:57.885360   87826 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 16:56:57.997961   87826 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 16:56:58.103664   87826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 16:56:58.204072   87826 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 16:56:58.222608   87826 out.go:177] 
	W0919 16:56:58.223958   87826 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W0919 16:56:58.223973   87826 out.go:239] * 
	* 
	W0919 16:56:58.227605   87826 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 16:56:58.229244   87826 out.go:177] 

** /stderr **
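
The failure is RUNTIME_ENABLE: restarting cri-docker.socket on the rebooted m03 VM exited with status 1, and the log only says to check the journal. A minimal follow-up sketch, assuming the VM is still running (profile and node names are taken from this report; -n/--node is minikube ssh's node selector):

	# Inspect the failed unit on the affected node:
	minikube -p multinode-415589 ssh -n m03 -- sudo systemctl status cri-docker.socket cri-docker.service
	# Pull the journal entries the error message points at:
	minikube -p multinode-415589 ssh -n m03 -- sudo journalctl -xe -u cri-docker.socket --no-pager

If the unit turns out to be missing or masked in the guest image, minikube logs --file=logs.txt (as suggested in the box above) captures the same journal for attaching to an issue.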
multinode_test.go:256: I0919 16:56:39.199901   87826 out.go:296] Setting OutFile to fd 1 ...
I0919 16:56:39.200157   87826 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 16:56:39.200167   87826 out.go:309] Setting ErrFile to fd 2...
I0919 16:56:39.200172   87826 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 16:56:39.200342   87826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-65689/.minikube/bin
I0919 16:56:39.200603   87826 mustload.go:65] Loading cluster: multinode-415589
I0919 16:56:39.200977   87826 config.go:182] Loaded profile config "multinode-415589": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 16:56:39.201345   87826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0919 16:56:39.201395   87826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:56:39.216195   87826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37605
I0919 16:56:39.216602   87826 main.go:141] libmachine: () Calling .GetVersion
I0919 16:56:39.217166   87826 main.go:141] libmachine: Using API Version  1
I0919 16:56:39.217198   87826 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:56:39.217567   87826 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:56:39.217779   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetState
W0919 16:56:39.219568   87826 host.go:58] "multinode-415589-m03" host status: Stopped
I0919 16:56:39.221900   87826 out.go:177] * Starting worker node multinode-415589-m03 in cluster multinode-415589
I0919 16:56:39.223373   87826 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
I0919 16:56:39.223648   87826 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17240-65689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
I0919 16:56:39.223736   87826 cache.go:57] Caching tarball of preloaded images
I0919 16:56:39.223837   87826 preload.go:174] Found /home/jenkins/minikube-integration/17240-65689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0919 16:56:39.223853   87826 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
I0919 16:56:39.224110   87826 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/config.json ...
I0919 16:56:39.224375   87826 start.go:365] acquiring machines lock for multinode-415589-m03: {Name:mk203c3120e1410acfaa868a5fe996910aac1894 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0919 16:56:39.224441   87826 start.go:369] acquired machines lock for "multinode-415589-m03" in 26.176µs
I0919 16:56:39.224467   87826 start.go:96] Skipping create...Using existing machine configuration
I0919 16:56:39.224481   87826 fix.go:54] fixHost starting: m03
I0919 16:56:39.225116   87826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0919 16:56:39.225156   87826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:56:39.239936   87826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35483
I0919 16:56:39.240271   87826 main.go:141] libmachine: () Calling .GetVersion
I0919 16:56:39.240686   87826 main.go:141] libmachine: Using API Version  1
I0919 16:56:39.240710   87826 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:56:39.241013   87826 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:56:39.241210   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
I0919 16:56:39.241372   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetState
I0919 16:56:39.242774   87826 fix.go:102] recreateIfNeeded on multinode-415589-m03: state=Stopped err=<nil>
I0919 16:56:39.242802   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
W0919 16:56:39.242979   87826 fix.go:128] unexpected machine state, will restart: <nil>
I0919 16:56:39.244671   87826 out.go:177] * Restarting existing kvm2 VM for "multinode-415589-m03" ...
I0919 16:56:39.245824   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .Start
I0919 16:56:39.246012   87826 main.go:141] libmachine: (multinode-415589-m03) Ensuring networks are active...
I0919 16:56:39.246675   87826 main.go:141] libmachine: (multinode-415589-m03) Ensuring network default is active
I0919 16:56:39.247090   87826 main.go:141] libmachine: (multinode-415589-m03) Ensuring network mk-multinode-415589 is active
I0919 16:56:39.247382   87826 main.go:141] libmachine: (multinode-415589-m03) Getting domain xml...
I0919 16:56:39.247957   87826 main.go:141] libmachine: (multinode-415589-m03) Creating domain...
I0919 16:56:40.483150   87826 main.go:141] libmachine: (multinode-415589-m03) Waiting to get IP...
I0919 16:56:40.484175   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:40.484612   87826 main.go:141] libmachine: (multinode-415589-m03) Found IP for machine: 192.168.50.209
I0919 16:56:40.484649   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has current primary IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:40.484685   87826 main.go:141] libmachine: (multinode-415589-m03) Reserving static IP address...
I0919 16:56:40.485247   87826 main.go:141] libmachine: (multinode-415589-m03) Reserved static IP address: 192.168.50.209
I0919 16:56:40.485289   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "multinode-415589-m03", mac: "52:54:00:7a:de:cd", ip: "192.168.50.209"} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:55:59 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:40.485314   87826 main.go:141] libmachine: (multinode-415589-m03) Waiting for SSH to be available...
I0919 16:56:40.485346   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | skip adding static IP to network mk-multinode-415589 - found existing host DHCP lease matching {name: "multinode-415589-m03", mac: "52:54:00:7a:de:cd", ip: "192.168.50.209"}
I0919 16:56:40.485363   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | Getting to WaitForSSH function...
I0919 16:56:40.487934   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:40.488393   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:55:59 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:40.488436   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:40.488641   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | Using SSH client type: external
I0919 16:56:40.488682   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m03/id_rsa (-rw-------)
I0919 16:56:40.488720   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.209 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
I0919 16:56:40.488740   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | About to run SSH command:
I0919 16:56:40.488755   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | exit 0
I0919 16:56:53.613468   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | SSH cmd err, output: <nil>: 
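The probe above is libmachine's WaitForSSH loop: it shells out to the external ssh client with the options logged a few lines earlier and considers the guest reachable once `exit 0` succeeds. A hand-run equivalent, trimmed to the essential options and using the key path and address from this log, would be:

    ssh -F /dev/null -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
        -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m03/id_rsa \
        docker@192.168.50.209 'exit 0' && echo reachable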
I0919 16:56:53.613856   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetConfigRaw
I0919 16:56:53.614493   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetIP
I0919 16:56:53.616937   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.617401   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:53.617436   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.617724   87826 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/config.json ...
I0919 16:56:53.617906   87826 machine.go:88] provisioning docker machine ...
I0919 16:56:53.617923   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
I0919 16:56:53.618135   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetMachineName
I0919 16:56:53.618306   87826 buildroot.go:166] provisioning hostname "multinode-415589-m03"
I0919 16:56:53.618322   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetMachineName
I0919 16:56:53.618468   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:53.620497   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.620805   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:53.620859   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.620984   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
I0919 16:56:53.621153   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:53.621331   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:53.621461   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
I0919 16:56:53.621665   87826 main.go:141] libmachine: Using SSH client type: native
I0919 16:56:53.622159   87826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.209 22 <nil> <nil>}
I0919 16:56:53.622182   87826 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-415589-m03 && echo "multinode-415589-m03" | sudo tee /etc/hostname
I0919 16:56:53.745954   87826 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-415589-m03

I0919 16:56:53.745999   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:53.748693   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.749081   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:53.749131   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.749287   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
I0919 16:56:53.749503   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:53.749674   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:53.749823   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
I0919 16:56:53.749982   87826 main.go:141] libmachine: Using SSH client type: native
I0919 16:56:53.750294   87826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.209 22 <nil> <nil>}
I0919 16:56:53.750312   87826 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\smultinode-415589-m03' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-415589-m03/g' /etc/hosts;
			else 
				echo '127.0.1.1 multinode-415589-m03' | sudo tee -a /etc/hosts; 
			fi
		fi
I0919 16:56:53.871256   87826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
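The empty output above is expected: the grep/sed script is idempotent and does nothing when /etc/hosts already carries the hostname; otherwise it rewrites an existing 127.0.1.1 entry in place or appends one. On this guest the resulting entry would read:

    127.0.1.1 multinode-415589-m03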
I0919 16:56:53.871299   87826 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17240-65689/.minikube CaCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17240-65689/.minikube}
I0919 16:56:53.871332   87826 buildroot.go:174] setting up certificates
I0919 16:56:53.871346   87826 provision.go:83] configureAuth start
I0919 16:56:53.871365   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetMachineName
I0919 16:56:53.871708   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetIP
I0919 16:56:53.874009   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.874436   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:53.874468   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.874575   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:53.876929   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.877341   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:53.877370   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.877499   87826 provision.go:138] copyHostCerts
I0919 16:56:53.877561   87826 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem, removing ...
I0919 16:56:53.877571   87826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem
I0919 16:56:53.877663   87826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem (1078 bytes)
I0919 16:56:53.877750   87826 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem, removing ...
I0919 16:56:53.877758   87826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem
I0919 16:56:53.877782   87826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem (1123 bytes)
I0919 16:56:53.877844   87826 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem, removing ...
I0919 16:56:53.877851   87826 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem
I0919 16:56:53.877871   87826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem (1675 bytes)
I0919 16:56:53.877923   87826 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem org=jenkins.multinode-415589-m03 san=[192.168.50.209 192.168.50.209 localhost 127.0.0.1 minikube multinode-415589-m03]
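The server certificate is regenerated with SANs covering every name the node may be addressed by (the san=[...] list in the preceding line). One way to verify the SANs afterwards, using stock openssl rather than anything minikube-specific:

    openssl x509 -in /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem \
        -noout -text | grep -A1 'Subject Alternative Name'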
I0919 16:56:53.962274   87826 provision.go:172] copyRemoteCerts
I0919 16:56:53.962335   87826 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0919 16:56:53.962360   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:53.965106   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.965469   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:53.965508   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:53.965637   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
I0919 16:56:53.965819   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:53.965980   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
I0919 16:56:53.966159   87826 sshutil.go:53] new ssh client: &{IP:192.168.50.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m03/id_rsa Username:docker}
I0919 16:56:54.050135   87826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0919 16:56:54.072583   87826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0919 16:56:54.093867   87826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0919 16:56:54.119558   87826 provision.go:86] duration metric: configureAuth took 248.195368ms
I0919 16:56:54.119582   87826 buildroot.go:189] setting minikube options for container-runtime
I0919 16:56:54.119795   87826 config.go:182] Loaded profile config "multinode-415589": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 16:56:54.119847   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
I0919 16:56:54.120138   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:54.122462   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:54.122807   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:54.122857   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:54.122964   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
I0919 16:56:54.123158   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:54.123316   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:54.123476   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
I0919 16:56:54.123656   87826 main.go:141] libmachine: Using SSH client type: native
I0919 16:56:54.123955   87826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.209 22 <nil> <nil>}
I0919 16:56:54.123968   87826 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0919 16:56:54.235038   87826 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs

I0919 16:56:54.235061   87826 buildroot.go:70] root file system type: tmpfs
I0919 16:56:54.235224   87826 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0919 16:56:54.235258   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:54.237841   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:54.238227   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:54.238265   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:54.238445   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
I0919 16:56:54.238630   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:54.238821   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:54.238942   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
I0919 16:56:54.239160   87826 main.go:141] libmachine: Using SSH client type: native
I0919 16:56:54.239526   87826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.209 22 <nil> <nil>}
I0919 16:56:54.239608   87826 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0919 16:56:54.362965   87826 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

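The bare `ExecStart=` followed by a populated `ExecStart=` in the unit above is the standard systemd idiom for replacing, rather than appending to, an inherited start command, exactly as the comment embedded in the unit explains. The same pattern in a minimal drop-in (file path and dockerd flags here are illustrative, not taken from this log):

    # /etc/systemd/system/docker.service.d/override.conf
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock

applied with `sudo systemctl daemon-reload && sudo systemctl restart docker`.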
I0919 16:56:54.363002   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:54.365649   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:54.366013   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:54.366040   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:54.366202   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
I0919 16:56:54.366423   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:54.366593   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:54.366750   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
I0919 16:56:54.366961   87826 main.go:141] libmachine: Using SSH client type: native
I0919 16:56:54.367396   87826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.209 22 <nil> <nil>}
I0919 16:56:54.367419   87826 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0919 16:56:55.217276   87826 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.

I0919 16:56:55.217309   87826 machine.go:91] provisioned docker machine in 1.599388316s
I0919 16:56:55.217324   87826 start.go:300] post-start starting for "multinode-415589-m03" (driver="kvm2")
I0919 16:56:55.217338   87826 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0919 16:56:55.217386   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
I0919 16:56:55.217780   87826 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0919 16:56:55.217825   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:55.220985   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.221442   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:55.221474   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.221637   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
I0919 16:56:55.221837   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:55.222041   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
I0919 16:56:55.222234   87826 sshutil.go:53] new ssh client: &{IP:192.168.50.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m03/id_rsa Username:docker}
I0919 16:56:55.308140   87826 ssh_runner.go:195] Run: cat /etc/os-release
I0919 16:56:55.312207   87826 info.go:137] Remote host: Buildroot 2021.02.12
I0919 16:56:55.312232   87826 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-65689/.minikube/addons for local assets ...
I0919 16:56:55.312324   87826 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-65689/.minikube/files for local assets ...
I0919 16:56:55.312438   87826 filesync.go:149] local asset: /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem -> 733972.pem in /etc/ssl/certs
I0919 16:56:55.312559   87826 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0919 16:56:55.321552   87826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem --> /etc/ssl/certs/733972.pem (1708 bytes)
I0919 16:56:55.343266   87826 start.go:303] post-start completed in 125.926082ms
I0919 16:56:55.343292   87826 fix.go:56] fixHost completed within 16.118813076s
I0919 16:56:55.343314   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:55.346010   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.346433   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:55.346468   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.346642   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
I0919 16:56:55.346830   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:55.346967   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:55.347087   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
I0919 16:56:55.347273   87826 main.go:141] libmachine: Using SSH client type: native
I0919 16:56:55.347748   87826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.209 22 <nil> <nil>}
I0919 16:56:55.347764   87826 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0919 16:56:55.458471   87826 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695142615.405890302

I0919 16:56:55.458492   87826 fix.go:206] guest clock: 1695142615.405890302
I0919 16:56:55.458500   87826 fix.go:219] Guest: 2023-09-19 16:56:55.405890302 +0000 UTC Remote: 2023-09-19 16:56:55.343296526 +0000 UTC m=+16.174472057 (delta=62.593776ms)
I0919 16:56:55.458536   87826 fix.go:190] guest clock delta is within tolerance: 62.593776ms
I0919 16:56:55.458541   87826 start.go:83] releasing machines lock for "multinode-415589-m03", held for 16.23408758s
I0919 16:56:55.458562   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
I0919 16:56:55.458895   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetIP
I0919 16:56:55.461888   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.462317   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:55.462352   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.462489   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
I0919 16:56:55.463238   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
I0919 16:56:55.463488   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .DriverName
I0919 16:56:55.463594   87826 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0919 16:56:55.463655   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:55.463780   87826 ssh_runner.go:195] Run: systemctl --version
I0919 16:56:55.463802   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHHostname
I0919 16:56:55.466416   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.466752   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:55.466791   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.466913   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.466943   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
I0919 16:56:55.467101   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:55.467219   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
I0919 16:56:55.467350   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:de:cd", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:56:51 +0000 UTC Type:0 Mac:52:54:00:7a:de:cd Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:multinode-415589-m03 Clientid:01:52:54:00:7a:de:cd}
I0919 16:56:55.467374   87826 main.go:141] libmachine: (multinode-415589-m03) DBG | domain multinode-415589-m03 has defined IP address 192.168.50.209 and MAC address 52:54:00:7a:de:cd in network mk-multinode-415589
I0919 16:56:55.467386   87826 sshutil.go:53] new ssh client: &{IP:192.168.50.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m03/id_rsa Username:docker}
I0919 16:56:55.467516   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHPort
I0919 16:56:55.467651   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHKeyPath
I0919 16:56:55.467782   87826 main.go:141] libmachine: (multinode-415589-m03) Calling .GetSSHUsername
I0919 16:56:55.467909   87826 sshutil.go:53] new ssh client: &{IP:192.168.50.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m03/id_rsa Username:docker}
I0919 16:56:55.552742   87826 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0919 16:56:55.580877   87826 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0919 16:56:55.581059   87826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0919 16:56:55.599969   87826 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0919 16:56:55.599994   87826 start.go:469] detecting cgroup driver to use...
I0919 16:56:55.600169   87826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0919 16:56:55.618705   87826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0919 16:56:55.629933   87826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0919 16:56:55.641013   87826 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0919 16:56:55.641072   87826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0919 16:56:55.652627   87826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0919 16:56:55.662867   87826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0919 16:56:55.672560   87826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0919 16:56:55.682697   87826 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0919 16:56:55.693463   87826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
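These sed passes leave /etc/containerd/config.toml using the runc v2 shim with the cgroupfs driver. The affected runtime stanza would look roughly like this afterwards (abridged reconstruction, not a dump from the guest):

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false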
I0919 16:56:55.703435   87826 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0919 16:56:55.711943   87826 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0919 16:56:55.720311   87826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0919 16:56:55.826917   87826 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0919 16:56:55.844596   87826 start.go:469] detecting cgroup driver to use...
I0919 16:56:55.844704   87826 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0919 16:56:55.859155   87826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0919 16:56:55.873010   87826 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0919 16:56:55.890737   87826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0919 16:56:55.903270   87826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0919 16:56:55.915537   87826 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0919 16:56:55.947328   87826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0919 16:56:55.960937   87826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0919 16:56:55.978060   87826 ssh_runner.go:195] Run: which cri-dockerd
I0919 16:56:55.981872   87826 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0919 16:56:55.989568   87826 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0919 16:56:56.003670   87826 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0919 16:56:56.112061   87826 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0919 16:56:56.232698   87826 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
I0919 16:56:56.232733   87826 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
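The 144-byte daemon.json pushed here is what pins Docker itself to the "cgroupfs" driver mentioned in the previous line. A representative payload (reconstructed from minikube's usual template, not dumped by this log):

    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": {"max-size": "100m"},
      "storage-driver": "overlay2"
    }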
I0919 16:56:56.249459   87826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0919 16:56:56.356638   87826 ssh_runner.go:195] Run: sudo systemctl restart docker
I0919 16:56:57.777045   87826 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.420368194s)
I0919 16:56:57.777131   87826 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0919 16:56:57.885360   87826 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0919 16:56:57.997961   87826 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0919 16:56:58.103664   87826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0919 16:56:58.204072   87826 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0919 16:56:58.222608   87826 out.go:177] 
W0919 16:56:58.223958   87826 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
stdout:

stderr:
Job failed. See "journalctl -xe" for details.

W0919 16:56:58.223973   87826 out.go:239] * 
W0919 16:56:58.227605   87826 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0919 16:56:58.229244   87826 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-linux-amd64 -p multinode-415589 node start m03 --alsologtostderr": exit status 90
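With cri-docker.socket refusing to restart, the actionable detail lives in the unit's journal on the guest; the stock systemd follow-ups (not part of this log) would be:

    sudo systemctl status cri-docker.socket cri-docker.service
    sudo journalctl -xeu cri-docker.socket --no-pager | tail -n 50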
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-415589 status: exit status 2 (574.043943ms)

-- stdout --
	multinode-415589
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-415589-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-415589-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-415589 status" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-415589 -n multinode-415589
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-415589 logs -n 25: (1.10180353s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-415589 cp multinode-415589:/home/docker/cp-test.txt                           | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
	|         | multinode-415589-m03:/home/docker/cp-test_multinode-415589_multinode-415589-m03.txt     |                  |         |         |                     |                     |
	| ssh     | multinode-415589 ssh -n                                                                 | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
	|         | multinode-415589 sudo cat                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-415589 ssh -n multinode-415589-m03 sudo cat                                   | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
	|         | /home/docker/cp-test_multinode-415589_multinode-415589-m03.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-415589 cp testdata/cp-test.txt                                                | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
	|         | multinode-415589-m02:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-415589 ssh -n                                                                 | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
	|         | multinode-415589-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-415589 cp multinode-415589-m02:/home/docker/cp-test.txt                       | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2979988656/001/cp-test_multinode-415589-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-415589 ssh -n                                                                 | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
	|         | multinode-415589-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-415589 cp multinode-415589-m02:/home/docker/cp-test.txt                       | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
	|         | multinode-415589:/home/docker/cp-test_multinode-415589-m02_multinode-415589.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-415589 ssh -n                                                                 | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
	|         | multinode-415589-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-415589 ssh -n multinode-415589 sudo cat                                       | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
	|         | /home/docker/cp-test_multinode-415589-m02_multinode-415589.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-415589 cp multinode-415589-m02:/home/docker/cp-test.txt                       | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
	|         | multinode-415589-m03:/home/docker/cp-test_multinode-415589-m02_multinode-415589-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-415589 ssh -n                                                                 | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
	|         | multinode-415589-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-415589 ssh -n multinode-415589-m03 sudo cat                                   | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
	|         | /home/docker/cp-test_multinode-415589-m02_multinode-415589-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-415589 cp testdata/cp-test.txt                                                | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
	|         | multinode-415589-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-415589 ssh -n                                                                 | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
	|         | multinode-415589-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-415589 cp multinode-415589-m03:/home/docker/cp-test.txt                       | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2979988656/001/cp-test_multinode-415589-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-415589 ssh -n                                                                 | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
	|         | multinode-415589-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-415589 cp multinode-415589-m03:/home/docker/cp-test.txt                       | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
	|         | multinode-415589:/home/docker/cp-test_multinode-415589-m03_multinode-415589.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-415589 ssh -n                                                                 | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
	|         | multinode-415589-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-415589 ssh -n multinode-415589 sudo cat                                       | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
	|         | /home/docker/cp-test_multinode-415589-m03_multinode-415589.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-415589 cp multinode-415589-m03:/home/docker/cp-test.txt                       | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
	|         | multinode-415589-m02:/home/docker/cp-test_multinode-415589-m03_multinode-415589-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-415589 ssh -n                                                                 | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
	|         | multinode-415589-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-415589 ssh -n multinode-415589-m02 sudo cat                                   | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
	|         | /home/docker/cp-test_multinode-415589-m03_multinode-415589-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-415589 node stop m03                                                          | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC | 19 Sep 23 16:56 UTC |
	| node    | multinode-415589 node start                                                             | multinode-415589 | jenkins | v1.31.2 | 19 Sep 23 16:56 UTC |                     |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* ==> Last Start <==
	* Log file created at: 2023/09/19 16:53:23
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
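	(That format reads: severity letter I/W/E/F, month and day, wall-clock time with microseconds, thread id, then source file:line and the message.) Splitting such lines back into fields takes one regular expression; a minimal Go sketch of a hypothetical parser, not part of minikube:

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine mirrors the documented format:
	//   [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^ \]]+:\d+)\] (.*)$`)

	func main() {
		sample := "I0919 16:53:23.713976   85253 out.go:296] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(sample); m != nil {
			fmt.Printf("severity=%s date=%s time=%s pid=%s loc=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}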
	I0919 16:53:23.713976   85253 out.go:296] Setting OutFile to fd 1 ...
	I0919 16:53:23.714258   85253 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 16:53:23.714268   85253 out.go:309] Setting ErrFile to fd 2...
	I0919 16:53:23.714276   85253 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 16:53:23.714513   85253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-65689/.minikube/bin
	I0919 16:53:23.715102   85253 out.go:303] Setting JSON to false
	I0919 16:53:23.716008   85253 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":5517,"bootTime":1695136887,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 16:53:23.716066   85253 start.go:138] virtualization: kvm guest
	I0919 16:53:23.718848   85253 out.go:177] * [multinode-415589] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0919 16:53:23.720695   85253 notify.go:220] Checking for updates...
	I0919 16:53:23.720705   85253 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 16:53:23.722480   85253 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 16:53:23.724037   85253 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17240-65689/kubeconfig
	I0919 16:53:23.725431   85253 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-65689/.minikube
	I0919 16:53:23.726676   85253 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 16:53:23.727940   85253 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 16:53:23.729336   85253 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 16:53:23.764031   85253 out.go:177] * Using the kvm2 driver based on user configuration
	I0919 16:53:23.765335   85253 start.go:298] selected driver: kvm2
	I0919 16:53:23.765351   85253 start.go:902] validating driver "kvm2" against <nil>
	I0919 16:53:23.765365   85253 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 16:53:23.766091   85253 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 16:53:23.766179   85253 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17240-65689/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 16:53:23.780403   85253 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0919 16:53:23.780470   85253 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 16:53:23.780799   85253 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 16:53:23.780844   85253 cni.go:84] Creating CNI manager for ""
	I0919 16:53:23.780858   85253 cni.go:136] 0 nodes found, recommending kindnet
	I0919 16:53:23.780868   85253 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 16:53:23.780884   85253 start_flags.go:321] config:
	{Name:multinode-415589 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-415589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 16:53:23.781058   85253 iso.go:125] acquiring lock: {Name:mkdf0d42546c83faf1a624ccdb8d9876db7a1a92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 16:53:23.783366   85253 out.go:177] * Starting control plane node multinode-415589 in cluster multinode-415589
	I0919 16:53:23.785163   85253 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 16:53:23.785194   85253 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17240-65689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I0919 16:53:23.785201   85253 cache.go:57] Caching tarball of preloaded images
	I0919 16:53:23.785300   85253 preload.go:174] Found /home/jenkins/minikube-integration/17240-65689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 16:53:23.785311   85253 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 16:53:23.786488   85253 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/config.json ...
	I0919 16:53:23.786551   85253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/config.json: {Name:mk76d9cce25713484142aeb499f9fb85a87b44c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:53:23.786965   85253 start.go:365] acquiring machines lock for multinode-415589: {Name:mk203c3120e1410acfaa868a5fe996910aac1894 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 16:53:23.787014   85253 start.go:369] acquired machines lock for "multinode-415589" in 27.275µs
	I0919 16:53:23.787037   85253 start.go:93] Provisioning new machine with config: &{Name:multinode-415589 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-415589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 16:53:23.787141   85253 start.go:125] createHost starting for "" (driver="kvm2")
	I0919 16:53:23.788733   85253 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 16:53:23.788876   85253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 16:53:23.788930   85253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:53:23.802270   85253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40445
	I0919 16:53:23.802693   85253 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:53:23.803288   85253 main.go:141] libmachine: Using API Version  1
	I0919 16:53:23.803309   85253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:53:23.803609   85253 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:53:23.803768   85253 main.go:141] libmachine: (multinode-415589) Calling .GetMachineName
	I0919 16:53:23.803890   85253 main.go:141] libmachine: (multinode-415589) Calling .DriverName
	I0919 16:53:23.804020   85253 start.go:159] libmachine.API.Create for "multinode-415589" (driver="kvm2")
	I0919 16:53:23.804049   85253 client.go:168] LocalClient.Create starting
	I0919 16:53:23.804080   85253 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem
	I0919 16:53:23.804113   85253 main.go:141] libmachine: Decoding PEM data...
	I0919 16:53:23.804128   85253 main.go:141] libmachine: Parsing certificate...
	I0919 16:53:23.804178   85253 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem
	I0919 16:53:23.804197   85253 main.go:141] libmachine: Decoding PEM data...
	I0919 16:53:23.804212   85253 main.go:141] libmachine: Parsing certificate...
	I0919 16:53:23.804229   85253 main.go:141] libmachine: Running pre-create checks...
	I0919 16:53:23.804239   85253 main.go:141] libmachine: (multinode-415589) Calling .PreCreateCheck
	I0919 16:53:23.804541   85253 main.go:141] libmachine: (multinode-415589) Calling .GetConfigRaw
	I0919 16:53:23.804879   85253 main.go:141] libmachine: Creating machine...
	I0919 16:53:23.804893   85253 main.go:141] libmachine: (multinode-415589) Calling .Create
	I0919 16:53:23.805014   85253 main.go:141] libmachine: (multinode-415589) Creating KVM machine...
	I0919 16:53:23.806092   85253 main.go:141] libmachine: (multinode-415589) DBG | found existing default KVM network
	I0919 16:53:23.806740   85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:23.806613   85275 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:d8:d6:c0} reservation:<nil>}
	I0919 16:53:23.807272   85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:23.807204   85275 network.go:209] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f8e0}
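	The two DBG lines above show the driver's free-subnet search: it walks candidate private /24s (192.168.39.0 was already held by virbr1, so it stepped on to 192.168.50.0) and takes the first range no local interface occupies. The core of that check, as a self-contained Go sketch (simplified; the real code also tracks in-process reservations):

	package main

	import (
		"fmt"
		"net"
	)

	// taken reports whether any local interface address falls inside subnet.
	func taken(subnet *net.IPNet) bool {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return true // fail closed: treat the subnet as unusable
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && subnet.Contains(ipnet.IP) {
				return true
			}
		}
		return false
	}

	func main() {
		// Step through candidate /24s the way the log does (39 -> 50 -> ...).
		for third := 39; third <= 254; third += 11 {
			_, subnet, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
			if taken(subnet) {
				fmt.Println("skipping subnet", subnet, "that is taken")
				continue
			}
			fmt.Println("using free private subnet", subnet)
			return
		}
	}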
	I0919 16:53:23.812325   85253 main.go:141] libmachine: (multinode-415589) DBG | trying to create private KVM network mk-multinode-415589 192.168.50.0/24...
	I0919 16:53:23.882095   85253 main.go:141] libmachine: (multinode-415589) DBG | private KVM network mk-multinode-415589 192.168.50.0/24 created
	I0919 16:53:23.882150   85253 main.go:141] libmachine: (multinode-415589) Setting up store path in /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589 ...
	I0919 16:53:23.882167   85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:23.882055   85275 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17240-65689/.minikube
	I0919 16:53:23.882189   85253 main.go:141] libmachine: (multinode-415589) Building disk image from file:///home/jenkins/minikube-integration/17240-65689/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I0919 16:53:23.882213   85253 main.go:141] libmachine: (multinode-415589) Downloading /home/jenkins/minikube-integration/17240-65689/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17240-65689/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I0919 16:53:24.095846   85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:24.095706   85275 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/id_rsa...
	I0919 16:53:24.564281   85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:24.564126   85275 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/multinode-415589.rawdisk...
	I0919 16:53:24.564323   85253 main.go:141] libmachine: (multinode-415589) DBG | Writing magic tar header
	I0919 16:53:24.564337   85253 main.go:141] libmachine: (multinode-415589) DBG | Writing SSH key tar header
	I0919 16:53:24.564353   85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:24.564257   85275 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589 ...
	I0919 16:53:24.564393   85253 main.go:141] libmachine: (multinode-415589) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589
	I0919 16:53:24.564410   85253 main.go:141] libmachine: (multinode-415589) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-65689/.minikube/machines
	I0919 16:53:24.564419   85253 main.go:141] libmachine: (multinode-415589) Setting executable bit set on /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589 (perms=drwx------)
	I0919 16:53:24.564429   85253 main.go:141] libmachine: (multinode-415589) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-65689/.minikube
	I0919 16:53:24.564444   85253 main.go:141] libmachine: (multinode-415589) Setting executable bit set on /home/jenkins/minikube-integration/17240-65689/.minikube/machines (perms=drwxr-xr-x)
	I0919 16:53:24.564462   85253 main.go:141] libmachine: (multinode-415589) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-65689
	I0919 16:53:24.564478   85253 main.go:141] libmachine: (multinode-415589) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0919 16:53:24.564489   85253 main.go:141] libmachine: (multinode-415589) DBG | Checking permissions on dir: /home/jenkins
	I0919 16:53:24.564503   85253 main.go:141] libmachine: (multinode-415589) DBG | Checking permissions on dir: /home
	I0919 16:53:24.564514   85253 main.go:141] libmachine: (multinode-415589) Setting executable bit set on /home/jenkins/minikube-integration/17240-65689/.minikube (perms=drwxr-xr-x)
	I0919 16:53:24.564525   85253 main.go:141] libmachine: (multinode-415589) DBG | Skipping /home - not owner
	I0919 16:53:24.564541   85253 main.go:141] libmachine: (multinode-415589) Setting executable bit set on /home/jenkins/minikube-integration/17240-65689 (perms=drwxrwxr-x)
	I0919 16:53:24.564557   85253 main.go:141] libmachine: (multinode-415589) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0919 16:53:24.564572   85253 main.go:141] libmachine: (multinode-415589) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0919 16:53:24.564587   85253 main.go:141] libmachine: (multinode-415589) Creating domain...
	I0919 16:53:24.565801   85253 main.go:141] libmachine: (multinode-415589) define libvirt domain using xml: 
	I0919 16:53:24.565839   85253 main.go:141] libmachine: (multinode-415589) <domain type='kvm'>
	I0919 16:53:24.565849   85253 main.go:141] libmachine: (multinode-415589)   <name>multinode-415589</name>
	I0919 16:53:24.565860   85253 main.go:141] libmachine: (multinode-415589)   <memory unit='MiB'>2200</memory>
	I0919 16:53:24.565869   85253 main.go:141] libmachine: (multinode-415589)   <vcpu>2</vcpu>
	I0919 16:53:24.565874   85253 main.go:141] libmachine: (multinode-415589)   <features>
	I0919 16:53:24.565881   85253 main.go:141] libmachine: (multinode-415589)     <acpi/>
	I0919 16:53:24.565886   85253 main.go:141] libmachine: (multinode-415589)     <apic/>
	I0919 16:53:24.565894   85253 main.go:141] libmachine: (multinode-415589)     <pae/>
	I0919 16:53:24.565902   85253 main.go:141] libmachine: (multinode-415589)     
	I0919 16:53:24.565911   85253 main.go:141] libmachine: (multinode-415589)   </features>
	I0919 16:53:24.565917   85253 main.go:141] libmachine: (multinode-415589)   <cpu mode='host-passthrough'>
	I0919 16:53:24.565925   85253 main.go:141] libmachine: (multinode-415589)   
	I0919 16:53:24.565930   85253 main.go:141] libmachine: (multinode-415589)   </cpu>
	I0919 16:53:24.565970   85253 main.go:141] libmachine: (multinode-415589)   <os>
	I0919 16:53:24.565995   85253 main.go:141] libmachine: (multinode-415589)     <type>hvm</type>
	I0919 16:53:24.566018   85253 main.go:141] libmachine: (multinode-415589)     <boot dev='cdrom'/>
	I0919 16:53:24.566033   85253 main.go:141] libmachine: (multinode-415589)     <boot dev='hd'/>
	I0919 16:53:24.566047   85253 main.go:141] libmachine: (multinode-415589)     <bootmenu enable='no'/>
	I0919 16:53:24.566059   85253 main.go:141] libmachine: (multinode-415589)   </os>
	I0919 16:53:24.566072   85253 main.go:141] libmachine: (multinode-415589)   <devices>
	I0919 16:53:24.566087   85253 main.go:141] libmachine: (multinode-415589)     <disk type='file' device='cdrom'>
	I0919 16:53:24.566105   85253 main.go:141] libmachine: (multinode-415589)       <source file='/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/boot2docker.iso'/>
	I0919 16:53:24.566120   85253 main.go:141] libmachine: (multinode-415589)       <target dev='hdc' bus='scsi'/>
	I0919 16:53:24.566133   85253 main.go:141] libmachine: (multinode-415589)       <readonly/>
	I0919 16:53:24.566150   85253 main.go:141] libmachine: (multinode-415589)     </disk>
	I0919 16:53:24.566165   85253 main.go:141] libmachine: (multinode-415589)     <disk type='file' device='disk'>
	I0919 16:53:24.566180   85253 main.go:141] libmachine: (multinode-415589)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0919 16:53:24.566198   85253 main.go:141] libmachine: (multinode-415589)       <source file='/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/multinode-415589.rawdisk'/>
	I0919 16:53:24.566213   85253 main.go:141] libmachine: (multinode-415589)       <target dev='hda' bus='virtio'/>
	I0919 16:53:24.566234   85253 main.go:141] libmachine: (multinode-415589)     </disk>
	I0919 16:53:24.566253   85253 main.go:141] libmachine: (multinode-415589)     <interface type='network'>
	I0919 16:53:24.566263   85253 main.go:141] libmachine: (multinode-415589)       <source network='mk-multinode-415589'/>
	I0919 16:53:24.566269   85253 main.go:141] libmachine: (multinode-415589)       <model type='virtio'/>
	I0919 16:53:24.566278   85253 main.go:141] libmachine: (multinode-415589)     </interface>
	I0919 16:53:24.566283   85253 main.go:141] libmachine: (multinode-415589)     <interface type='network'>
	I0919 16:53:24.566292   85253 main.go:141] libmachine: (multinode-415589)       <source network='default'/>
	I0919 16:53:24.566298   85253 main.go:141] libmachine: (multinode-415589)       <model type='virtio'/>
	I0919 16:53:24.566306   85253 main.go:141] libmachine: (multinode-415589)     </interface>
	I0919 16:53:24.566316   85253 main.go:141] libmachine: (multinode-415589)     <serial type='pty'>
	I0919 16:53:24.566330   85253 main.go:141] libmachine: (multinode-415589)       <target port='0'/>
	I0919 16:53:24.566343   85253 main.go:141] libmachine: (multinode-415589)     </serial>
	I0919 16:53:24.566351   85253 main.go:141] libmachine: (multinode-415589)     <console type='pty'>
	I0919 16:53:24.566362   85253 main.go:141] libmachine: (multinode-415589)       <target type='serial' port='0'/>
	I0919 16:53:24.566371   85253 main.go:141] libmachine: (multinode-415589)     </console>
	I0919 16:53:24.566377   85253 main.go:141] libmachine: (multinode-415589)     <rng model='virtio'>
	I0919 16:53:24.566384   85253 main.go:141] libmachine: (multinode-415589)       <backend model='random'>/dev/random</backend>
	I0919 16:53:24.566394   85253 main.go:141] libmachine: (multinode-415589)     </rng>
	I0919 16:53:24.566400   85253 main.go:141] libmachine: (multinode-415589)     
	I0919 16:53:24.566405   85253 main.go:141] libmachine: (multinode-415589)     
	I0919 16:53:24.566411   85253 main.go:141] libmachine: (multinode-415589)   </devices>
	I0919 16:53:24.566418   85253 main.go:141] libmachine: (multinode-415589) </domain>
	I0919 16:53:24.566427   85253 main.go:141] libmachine: (multinode-415589) 
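	The XML printed above is a complete libvirt domain definition: boot order (cdrom then hd), the boot2docker ISO attached as a SCSI cdrom, the raw disk as a virtio device, one NIC on the private mk-multinode-415589 network and one on libvirt's default network, plus a serial console and a virtio RNG. Feeding such a document to libvirtd from Go looks roughly like this (a sketch using the github.com/libvirt/libvirt-go bindings; the placeholder string stands in for the full XML above):

	package main

	import (
		"log"

		libvirt "github.com/libvirt/libvirt-go"
	)

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		domainXML := `<domain type='kvm'>...</domain>` // replace with the XML above

		dom, err := conn.DomainDefineXML(domainXML) // "define libvirt domain using xml"
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // boots the VM ("Creating domain...")
			log.Fatal(err)
		}
		log.Println("domain is running")
	}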
	I0919 16:53:24.570337   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:34:f3:25 in network default
	I0919 16:53:24.570941   85253 main.go:141] libmachine: (multinode-415589) Ensuring networks are active...
	I0919 16:53:24.570966   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:24.571696   85253 main.go:141] libmachine: (multinode-415589) Ensuring network default is active
	I0919 16:53:24.572035   85253 main.go:141] libmachine: (multinode-415589) Ensuring network mk-multinode-415589 is active
	I0919 16:53:24.572581   85253 main.go:141] libmachine: (multinode-415589) Getting domain xml...
	I0919 16:53:24.573336   85253 main.go:141] libmachine: (multinode-415589) Creating domain...
	I0919 16:53:25.782983   85253 main.go:141] libmachine: (multinode-415589) Waiting to get IP...
	I0919 16:53:25.783879   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:25.784331   85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find current IP address of domain multinode-415589 in network mk-multinode-415589
	I0919 16:53:25.784366   85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:25.784312   85275 retry.go:31] will retry after 252.974185ms: waiting for machine to come up
	I0919 16:53:26.038922   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:26.039386   85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find current IP address of domain multinode-415589 in network mk-multinode-415589
	I0919 16:53:26.039414   85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:26.039308   85275 retry.go:31] will retry after 358.552851ms: waiting for machine to come up
	I0919 16:53:26.399726   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:26.400173   85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find current IP address of domain multinode-415589 in network mk-multinode-415589
	I0919 16:53:26.400216   85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:26.400122   85275 retry.go:31] will retry after 311.756361ms: waiting for machine to come up
	I0919 16:53:26.713663   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:26.714166   85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find current IP address of domain multinode-415589 in network mk-multinode-415589
	I0919 16:53:26.714189   85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:26.714114   85275 retry.go:31] will retry after 503.231809ms: waiting for machine to come up
	I0919 16:53:27.218721   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:27.219145   85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find current IP address of domain multinode-415589 in network mk-multinode-415589
	I0919 16:53:27.219193   85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:27.219087   85275 retry.go:31] will retry after 722.334547ms: waiting for machine to come up
	I0919 16:53:27.942991   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:27.943444   85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find current IP address of domain multinode-415589 in network mk-multinode-415589
	I0919 16:53:27.943484   85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:27.943402   85275 retry.go:31] will retry after 906.092251ms: waiting for machine to come up
	I0919 16:53:28.850606   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:28.850997   85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find current IP address of domain multinode-415589 in network mk-multinode-415589
	I0919 16:53:28.851055   85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:28.850978   85275 retry.go:31] will retry after 993.305084ms: waiting for machine to come up
	I0919 16:53:29.846159   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:29.846687   85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find current IP address of domain multinode-415589 in network mk-multinode-415589
	I0919 16:53:29.846720   85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:29.846589   85275 retry.go:31] will retry after 1.181964129s: waiting for machine to come up
	I0919 16:53:31.030026   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:31.030546   85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find current IP address of domain multinode-415589 in network mk-multinode-415589
	I0919 16:53:31.030580   85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:31.030471   85275 retry.go:31] will retry after 1.503627047s: waiting for machine to come up
	I0919 16:53:32.536090   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:32.536662   85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find current IP address of domain multinode-415589 in network mk-multinode-415589
	I0919 16:53:32.536687   85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:32.536601   85275 retry.go:31] will retry after 2.132959485s: waiting for machine to come up
	I0919 16:53:34.671533   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:34.672140   85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find current IP address of domain multinode-415589 in network mk-multinode-415589
	I0919 16:53:34.672180   85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:34.672088   85275 retry.go:31] will retry after 1.835249108s: waiting for machine to come up
	I0919 16:53:36.510708   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:36.511209   85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find current IP address of domain multinode-415589 in network mk-multinode-415589
	I0919 16:53:36.511239   85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:36.511191   85275 retry.go:31] will retry after 2.854076315s: waiting for machine to come up
	I0919 16:53:39.366850   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:39.367241   85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find current IP address of domain multinode-415589 in network mk-multinode-415589
	I0919 16:53:39.367283   85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:39.367193   85275 retry.go:31] will retry after 2.736485042s: waiting for machine to come up
	I0919 16:53:42.107079   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:42.107489   85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find current IP address of domain multinode-415589 in network mk-multinode-415589
	I0919 16:53:42.107515   85253 main.go:141] libmachine: (multinode-415589) DBG | I0919 16:53:42.107430   85275 retry.go:31] will retry after 3.431002257s: waiting for machine to come up
	I0919 16:53:45.540721   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:45.541204   85253 main.go:141] libmachine: (multinode-415589) Found IP for machine: 192.168.50.11
	I0919 16:53:45.541222   85253 main.go:141] libmachine: (multinode-415589) Reserving static IP address...
	I0919 16:53:45.541232   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has current primary IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:45.541644   85253 main.go:141] libmachine: (multinode-415589) DBG | unable to find host DHCP lease matching {name: "multinode-415589", mac: "52:54:00:a4:6c:54", ip: "192.168.50.11"} in network mk-multinode-415589
	I0919 16:53:45.612920   85253 main.go:141] libmachine: (multinode-415589) DBG | Getting to WaitForSSH function...
	I0919 16:53:45.612959   85253 main.go:141] libmachine: (multinode-415589) Reserved static IP address: 192.168.50.11
	I0919 16:53:45.613017   85253 main.go:141] libmachine: (multinode-415589) Waiting for SSH to be available...
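	The long run of "will retry after ..." lines above is a poll loop with growing, jittered backoff: ask libvirt for the domain's DHCP lease and, while no IP has appeared, sleep a little longer each round (252ms, 358ms, ... up to several seconds) until 192.168.50.11 shows up about 22 seconds in. The shape of that loop as a stand-alone Go sketch (lookupIP is a hypothetical stand-in for the lease query, and the timings are illustrative):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var errNoIP = errors.New("unable to find current IP address")

	// lookupIP stands in for querying libvirt's DHCP leases; hypothetical.
	func lookupIP() (string, error) {
		return "", errNoIP // pretend the lease has not appeared yet
	}

	func main() {
		backoff := 250 * time.Millisecond
		deadline := time.Now().Add(10 * time.Second) // shortened for the sketch
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				fmt.Println("Found IP for machine:", ip)
				return
			}
			// Jitter the wait, then grow it, mirroring the intervals in the log.
			wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			backoff += backoff / 2
		}
		fmt.Println("timed out waiting for an IP")
	}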
	I0919 16:53:45.615527   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:45.615904   85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a4:6c:54}
	I0919 16:53:45.615948   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:45.616103   85253 main.go:141] libmachine: (multinode-415589) DBG | Using SSH client type: external
	I0919 16:53:45.616148   85253 main.go:141] libmachine: (multinode-415589) DBG | Using SSH private key: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/id_rsa (-rw-------)
	I0919 16:53:45.616196   85253 main.go:141] libmachine: (multinode-415589) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 16:53:45.616219   85253 main.go:141] libmachine: (multinode-415589) DBG | About to run SSH command:
	I0919 16:53:45.616239   85253 main.go:141] libmachine: (multinode-415589) DBG | exit 0
	I0919 16:53:45.713404   85253 main.go:141] libmachine: (multinode-415589) DBG | SSH cmd err, output: <nil>: 
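	With the external client type, "waiting for SSH" simply shells out to /usr/bin/ssh with the flags listed above and runs "exit 0"; a zero exit status means the guest's sshd is answering. In Go that amounts to the following os/exec sketch (flags trimmed, host and key path taken from this log):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", "/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/id_rsa",
			"-p", "22",
			"docker@192.168.50.11",
			"exit 0", // the availability probe itself
		}
		if out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput(); err != nil {
			log.Fatalf("SSH not ready: %v (%s)", err, out)
		}
		log.Println("SSH is available")
	}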
	I0919 16:53:45.713684   85253 main.go:141] libmachine: (multinode-415589) KVM machine creation complete!
	I0919 16:53:45.713939   85253 main.go:141] libmachine: (multinode-415589) Calling .GetConfigRaw
	I0919 16:53:45.714622   85253 main.go:141] libmachine: (multinode-415589) Calling .DriverName
	I0919 16:53:45.714861   85253 main.go:141] libmachine: (multinode-415589) Calling .DriverName
	I0919 16:53:45.715018   85253 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0919 16:53:45.715036   85253 main.go:141] libmachine: (multinode-415589) Calling .GetState
	I0919 16:53:45.716280   85253 main.go:141] libmachine: Detecting operating system of created instance...
	I0919 16:53:45.716327   85253 main.go:141] libmachine: Waiting for SSH to be available...
	I0919 16:53:45.716334   85253 main.go:141] libmachine: Getting to WaitForSSH function...
	I0919 16:53:45.716341   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
	I0919 16:53:45.718601   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:45.718916   85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
	I0919 16:53:45.718942   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:45.719071   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
	I0919 16:53:45.719260   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
	I0919 16:53:45.719405   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
	I0919 16:53:45.719528   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
	I0919 16:53:45.719685   85253 main.go:141] libmachine: Using SSH client type: native
	I0919 16:53:45.720119   85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.11 22 <nil> <nil>}
	I0919 16:53:45.720137   85253 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0919 16:53:45.848916   85253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
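	The native client repeats the same probe in-process instead of shelling out: dial the guest, run "exit 0", and a nil error means sshd is up. With golang.org/x/crypto/ssh the equivalent is roughly (a sketch; host-key checking is disabled to mirror the StrictHostKeyChecking=no used above):

	package main

	import (
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // StrictHostKeyChecking=no
		}
		client, err := ssh.Dial("tcp", "192.168.50.11:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		if err := sess.Run("exit 0"); err != nil {
			log.Fatal("probe failed: ", err)
		}
		log.Println("SSH is available")
	}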
	I0919 16:53:45.848937   85253 main.go:141] libmachine: Detecting the provisioner...
	I0919 16:53:45.848945   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
	I0919 16:53:45.851880   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:45.852261   85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
	I0919 16:53:45.852302   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:45.852488   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
	I0919 16:53:45.852694   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
	I0919 16:53:45.852886   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
	I0919 16:53:45.853072   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
	I0919 16:53:45.853259   85253 main.go:141] libmachine: Using SSH client type: native
	I0919 16:53:45.853760   85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.11 22 <nil> <nil>}
	I0919 16:53:45.853776   85253 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0919 16:53:45.982231   85253 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0919 16:53:45.982317   85253 main.go:141] libmachine: found compatible host: buildroot
	I0919 16:53:45.982330   85253 main.go:141] libmachine: Provisioning with buildroot...
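	Provisioner detection is nothing more than "cat /etc/os-release" plus key=value parsing: the ID field above is what maps the guest to the buildroot provisioner. A small Go sketch of that parse over the exact output shown:

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	const osRelease = `NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"`

	func main() {
		fields := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(osRelease))
		for sc.Scan() {
			k, v, ok := strings.Cut(strings.TrimSpace(sc.Text()), "=")
			if !ok {
				continue
			}
			fields[k] = strings.Trim(v, `"`) // values may be quoted
		}
		if fields["ID"] == "buildroot" {
			fmt.Println("found compatible host: buildroot")
		}
	}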
	I0919 16:53:45.982340   85253 main.go:141] libmachine: (multinode-415589) Calling .GetMachineName
	I0919 16:53:45.982612   85253 buildroot.go:166] provisioning hostname "multinode-415589"
	I0919 16:53:45.982635   85253 main.go:141] libmachine: (multinode-415589) Calling .GetMachineName
	I0919 16:53:45.982835   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
	I0919 16:53:45.985679   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:45.986006   85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
	I0919 16:53:45.986027   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:45.986340   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
	I0919 16:53:45.986550   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
	I0919 16:53:45.986740   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
	I0919 16:53:45.986918   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
	I0919 16:53:45.987151   85253 main.go:141] libmachine: Using SSH client type: native
	I0919 16:53:45.987472   85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.11 22 <nil> <nil>}
	I0919 16:53:45.987487   85253 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-415589 && echo "multinode-415589" | sudo tee /etc/hostname
	I0919 16:53:46.130414   85253 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-415589
	
	I0919 16:53:46.130455   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
	I0919 16:53:46.133233   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:46.133645   85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
	I0919 16:53:46.133682   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:46.133829   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
	I0919 16:53:46.134026   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
	I0919 16:53:46.134189   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
	I0919 16:53:46.134342   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
	I0919 16:53:46.134511   85253 main.go:141] libmachine: Using SSH client type: native
	I0919 16:53:46.134836   85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.11 22 <nil> <nil>}
	I0919 16:53:46.134853   85253 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-415589' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-415589/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-415589' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 16:53:46.272842   85253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 16:53:46.272872   85253 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17240-65689/.minikube CaCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17240-65689/.minikube}
	I0919 16:53:46.272921   85253 buildroot.go:174] setting up certificates
	I0919 16:53:46.272944   85253 provision.go:83] configureAuth start
	I0919 16:53:46.272972   85253 main.go:141] libmachine: (multinode-415589) Calling .GetMachineName
	I0919 16:53:46.273307   85253 main.go:141] libmachine: (multinode-415589) Calling .GetIP
	I0919 16:53:46.275860   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:46.276232   85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
	I0919 16:53:46.276288   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:46.276389   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
	I0919 16:53:46.278401   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:46.278721   85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
	I0919 16:53:46.278754   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:46.278874   85253 provision.go:138] copyHostCerts
	I0919 16:53:46.278907   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem
	I0919 16:53:46.278969   85253 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem, removing ...
	I0919 16:53:46.278981   85253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem
	I0919 16:53:46.279043   85253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem (1078 bytes)
	I0919 16:53:46.279149   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem
	I0919 16:53:46.279176   85253 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem, removing ...
	I0919 16:53:46.279183   85253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem
	I0919 16:53:46.279218   85253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem (1123 bytes)
	I0919 16:53:46.279295   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem
	I0919 16:53:46.279316   85253 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem, removing ...
	I0919 16:53:46.279323   85253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem
	I0919 16:53:46.279350   85253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem (1675 bytes)
	I0919 16:53:46.279411   85253 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem org=jenkins.multinode-415589 san=[192.168.50.11 192.168.50.11 localhost 127.0.0.1 minikube multinode-415589]
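	The line above summarizes the server-certificate generation: a cert for org jenkins.multinode-415589 carrying the machine IP, 127.0.0.1/localhost and both hostnames as SANs, signed by the local minikubeCA key pair. The crypto/x509 recipe for that looks roughly as follows (a self-contained sketch that creates a throwaway CA in memory, where minikube instead loads ca.pem/ca-key.pem from disk):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"log"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA; minikube reads this pair from certs/ca.pem + ca-key.pem.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration above
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, err := x509.ParseCertificate(caDER)
		if err != nil {
			log.Fatal(err)
		}

		// Server cert with the SANs from the san=[...] list in the log.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-415589"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("192.168.50.11"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "multinode-415589"},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("server cert: %d DER bytes", len(srvDER))
	}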
	I0919 16:53:46.414692   85253 provision.go:172] copyRemoteCerts
	I0919 16:53:46.414763   85253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 16:53:46.414813   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
	I0919 16:53:46.417481   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:46.417794   85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
	I0919 16:53:46.417830   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:46.417971   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
	I0919 16:53:46.418131   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
	I0919 16:53:46.418238   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
	I0919 16:53:46.418351   85253 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/id_rsa Username:docker}
	I0919 16:53:46.510528   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 16:53:46.510602   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 16:53:46.533565   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 16:53:46.533649   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0919 16:53:46.556587   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 16:53:46.556651   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 16:53:46.578928   85253 provision.go:86] duration metric: configureAuth took 305.966092ms
	I0919 16:53:46.578952   85253 buildroot.go:189] setting minikube options for container-runtime
	I0919 16:53:46.579161   85253 config.go:182] Loaded profile config "multinode-415589": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 16:53:46.579191   85253 main.go:141] libmachine: (multinode-415589) Calling .DriverName
	I0919 16:53:46.579510   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
	I0919 16:53:46.582101   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:46.582507   85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
	I0919 16:53:46.582540   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:46.582654   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
	I0919 16:53:46.582845   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
	I0919 16:53:46.582960   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
	I0919 16:53:46.583146   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
	I0919 16:53:46.583286   85253 main.go:141] libmachine: Using SSH client type: native
	I0919 16:53:46.583592   85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.11 22 <nil> <nil>}
	I0919 16:53:46.583604   85253 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 16:53:46.715173   85253 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0919 16:53:46.715197   85253 buildroot.go:70] root file system type: tmpfs
	I0919 16:53:46.715388   85253 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 16:53:46.715428   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
	I0919 16:53:46.718215   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:46.718600   85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
	I0919 16:53:46.718649   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:46.718781   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
	I0919 16:53:46.718949   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
	I0919 16:53:46.719107   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
	I0919 16:53:46.719220   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
	I0919 16:53:46.719382   85253 main.go:141] libmachine: Using SSH client type: native
	I0919 16:53:46.719688   85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.11 22 <nil> <nil>}
	I0919 16:53:46.719756   85253 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 16:53:46.862632   85253 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 16:53:46.862674   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
	I0919 16:53:46.865253   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:46.865654   85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
	I0919 16:53:46.865686   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:46.865877   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
	I0919 16:53:46.866082   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
	I0919 16:53:46.866283   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
	I0919 16:53:46.866437   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
	I0919 16:53:46.866639   85253 main.go:141] libmachine: Using SSH client type: native
	I0919 16:53:46.867022   85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.11 22 <nil> <nil>}
	I0919 16:53:46.867043   85253 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
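	This is an install-if-changed idiom: diff -u exits non-zero when the staged unit differs from the installed one (or, as in this run, when no unit is installed yet), and only then is the new file moved into place and docker reloaded, enabled, and restarted; the output below shows exactly that first-install path. The pattern in isolation (file names illustrative):
	
	    sudo diff -u old.service new.service \
	      || { sudo mv new.service old.service; \
	           sudo systemctl daemon-reload && sudo systemctl restart docker; }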
	I0919 16:53:47.689007   85253 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0919 16:53:47.689047   85253 main.go:141] libmachine: Checking connection to Docker...
	I0919 16:53:47.689064   85253 main.go:141] libmachine: (multinode-415589) Calling .GetURL
	I0919 16:53:47.690339   85253 main.go:141] libmachine: (multinode-415589) DBG | Using libvirt version 6000000
	I0919 16:53:47.692513   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:47.692835   85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
	I0919 16:53:47.692867   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:47.693034   85253 main.go:141] libmachine: Docker is up and running!
	I0919 16:53:47.693051   85253 main.go:141] libmachine: Reticulating splines...
	I0919 16:53:47.693065   85253 client.go:171] LocalClient.Create took 23.888998966s
	I0919 16:53:47.693088   85253 start.go:167] duration metric: libmachine.API.Create for "multinode-415589" took 23.889070559s
	I0919 16:53:47.693098   85253 start.go:300] post-start starting for "multinode-415589" (driver="kvm2")
	I0919 16:53:47.693107   85253 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 16:53:47.693124   85253 main.go:141] libmachine: (multinode-415589) Calling .DriverName
	I0919 16:53:47.693386   85253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 16:53:47.693413   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
	I0919 16:53:47.695565   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:47.695907   85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
	I0919 16:53:47.695940   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:47.696026   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
	I0919 16:53:47.696190   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
	I0919 16:53:47.696366   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
	I0919 16:53:47.696513   85253 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/id_rsa Username:docker}
	I0919 16:53:47.791129   85253 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 16:53:47.795143   85253 command_runner.go:130] > NAME=Buildroot
	I0919 16:53:47.795164   85253 command_runner.go:130] > VERSION=2021.02.12-1-gb090841-dirty
	I0919 16:53:47.795170   85253 command_runner.go:130] > ID=buildroot
	I0919 16:53:47.795175   85253 command_runner.go:130] > VERSION_ID=2021.02.12
	I0919 16:53:47.795180   85253 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0919 16:53:47.795380   85253 info.go:137] Remote host: Buildroot 2021.02.12
	I0919 16:53:47.795400   85253 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-65689/.minikube/addons for local assets ...
	I0919 16:53:47.795465   85253 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-65689/.minikube/files for local assets ...
	I0919 16:53:47.795573   85253 filesync.go:149] local asset: /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem -> 733972.pem in /etc/ssl/certs
	I0919 16:53:47.795587   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem -> /etc/ssl/certs/733972.pem
	I0919 16:53:47.795697   85253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 16:53:47.803841   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem --> /etc/ssl/certs/733972.pem (1708 bytes)
	I0919 16:53:47.827423   85253 start.go:303] post-start completed in 134.313518ms
	I0919 16:53:47.827470   85253 main.go:141] libmachine: (multinode-415589) Calling .GetConfigRaw
	I0919 16:53:47.828046   85253 main.go:141] libmachine: (multinode-415589) Calling .GetIP
	I0919 16:53:47.830771   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:47.831133   85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
	I0919 16:53:47.831167   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:47.831467   85253 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/config.json ...
	I0919 16:53:47.831681   85253 start.go:128] duration metric: createHost completed in 24.044529067s
	I0919 16:53:47.831712   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
	I0919 16:53:47.834010   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:47.834358   85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
	I0919 16:53:47.834393   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:47.834504   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
	I0919 16:53:47.834717   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
	I0919 16:53:47.834866   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
	I0919 16:53:47.834987   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
	I0919 16:53:47.835153   85253 main.go:141] libmachine: Using SSH client type: native
	I0919 16:53:47.835515   85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.11 22 <nil> <nil>}
	I0919 16:53:47.835529   85253 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 16:53:47.970730   85253 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695142427.940641526
	
	I0919 16:53:47.970753   85253 fix.go:206] guest clock: 1695142427.940641526
	I0919 16:53:47.970762   85253 fix.go:219] Guest: 2023-09-19 16:53:47.940641526 +0000 UTC Remote: 2023-09-19 16:53:47.831697205 +0000 UTC m=+24.148141812 (delta=108.944321ms)
	I0919 16:53:47.970812   85253 fix.go:190] guest clock delta is within tolerance: 108.944321ms
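	The guest-clock check above runs date +%s.%N inside the VM (seconds and nanoseconds since the epoch) and compares it with the host's wall clock at the moment the command returns: 1695142427.940641526 - 1695142427.831697205 ≈ 0.108944321 s, within minikube's skew tolerance, so the guest clock is left alone. The probe by itself:
	
	    date +%s.%N    # e.g. 1695142427.940641526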
	I0919 16:53:47.970820   85253 start.go:83] releasing machines lock for "multinode-415589", held for 24.183793705s
	I0919 16:53:47.970853   85253 main.go:141] libmachine: (multinode-415589) Calling .DriverName
	I0919 16:53:47.971128   85253 main.go:141] libmachine: (multinode-415589) Calling .GetIP
	I0919 16:53:47.973546   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:47.973887   85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
	I0919 16:53:47.973922   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:47.974000   85253 main.go:141] libmachine: (multinode-415589) Calling .DriverName
	I0919 16:53:47.974567   85253 main.go:141] libmachine: (multinode-415589) Calling .DriverName
	I0919 16:53:47.974733   85253 main.go:141] libmachine: (multinode-415589) Calling .DriverName
	I0919 16:53:47.974818   85253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 16:53:47.974870   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
	I0919 16:53:47.974956   85253 ssh_runner.go:195] Run: cat /version.json
	I0919 16:53:47.974982   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
	I0919 16:53:47.977511   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:47.977736   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:47.977996   85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
	I0919 16:53:47.978019   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:47.978169   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
	I0919 16:53:47.978295   85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
	I0919 16:53:47.978325   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:47.978342   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
	I0919 16:53:47.978506   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
	I0919 16:53:47.978515   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
	I0919 16:53:47.978696   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
	I0919 16:53:47.978712   85253 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/id_rsa Username:docker}
	I0919 16:53:47.978870   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
	I0919 16:53:47.979016   85253 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/id_rsa Username:docker}
	I0919 16:53:48.074567   85253 command_runner.go:130] > {"iso_version": "v1.31.0-1695060926-17240", "kicbase_version": "v0.0.40-1694798187-17250", "minikube_version": "v1.31.2", "commit": "0402681e4770013826956f326b174c70611f3073"}
	I0919 16:53:48.074953   85253 ssh_runner.go:195] Run: systemctl --version
	I0919 16:53:48.100924   85253 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0919 16:53:48.100983   85253 command_runner.go:130] > systemd 247 (247)
	I0919 16:53:48.101006   85253 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0919 16:53:48.101086   85253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 16:53:48.106790   85253 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0919 16:53:48.106848   85253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 16:53:48.106902   85253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 16:53:48.123897   85253 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0919 16:53:48.124310   85253 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
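	Any pre-existing bridge or podman CNI configs under /etc/cni/net.d are renamed with a .mk_disabled suffix so they cannot shadow the CNI minikube installs later (kindnet, per the "recommending kindnet" line further down); here that disables 87-podman-bridge.conflist. The rename step as a standalone command (quoting normalized for an interactive shell):
	
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;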
	I0919 16:53:48.124337   85253 start.go:469] detecting cgroup driver to use...
	I0919 16:53:48.124477   85253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 16:53:48.140085   85253 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0919 16:53:48.140516   85253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0919 16:53:48.150715   85253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 16:53:48.160723   85253 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0919 16:53:48.160782   85253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0919 16:53:48.170725   85253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 16:53:48.180864   85253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 16:53:48.190799   85253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 16:53:48.200759   85253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 16:53:48.210920   85253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 16:53:48.220562   85253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 16:53:48.229676   85253 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0919 16:53:48.229735   85253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
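	Both commands above touch kernel prerequisites for Kubernetes pod networking: bridged traffic must pass through iptables (net.bridge.bridge-nf-call-iptables = 1, already set here), and the node must forward IPv4 between interfaces. The same two knobs in isolation:
	
	    sysctl net.bridge.bridge-nf-call-iptables            # expect "... = 1"
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'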
	I0919 16:53:48.238521   85253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 16:53:48.338814   85253 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 16:53:48.357869   85253 start.go:469] detecting cgroup driver to use...
	I0919 16:53:48.357971   85253 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 16:53:48.370639   85253 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0919 16:53:48.371471   85253 command_runner.go:130] > [Unit]
	I0919 16:53:48.371490   85253 command_runner.go:130] > Description=Docker Application Container Engine
	I0919 16:53:48.371504   85253 command_runner.go:130] > Documentation=https://docs.docker.com
	I0919 16:53:48.371518   85253 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0919 16:53:48.371530   85253 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0919 16:53:48.371541   85253 command_runner.go:130] > StartLimitBurst=3
	I0919 16:53:48.371548   85253 command_runner.go:130] > StartLimitIntervalSec=60
	I0919 16:53:48.371552   85253 command_runner.go:130] > [Service]
	I0919 16:53:48.371557   85253 command_runner.go:130] > Type=notify
	I0919 16:53:48.371561   85253 command_runner.go:130] > Restart=on-failure
	I0919 16:53:48.371571   85253 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0919 16:53:48.371581   85253 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0919 16:53:48.371589   85253 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0919 16:53:48.371601   85253 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0919 16:53:48.371615   85253 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0919 16:53:48.371629   85253 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0919 16:53:48.371645   85253 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0919 16:53:48.371657   85253 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0919 16:53:48.371666   85253 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0919 16:53:48.371673   85253 command_runner.go:130] > ExecStart=
	I0919 16:53:48.371689   85253 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0919 16:53:48.371699   85253 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0919 16:53:48.371713   85253 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0919 16:53:48.371727   85253 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0919 16:53:48.371739   85253 command_runner.go:130] > LimitNOFILE=infinity
	I0919 16:53:48.371749   85253 command_runner.go:130] > LimitNPROC=infinity
	I0919 16:53:48.371756   85253 command_runner.go:130] > LimitCORE=infinity
	I0919 16:53:48.371765   85253 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0919 16:53:48.371771   85253 command_runner.go:130] > # Only systemd 226 and above support this option.
	I0919 16:53:48.371776   85253 command_runner.go:130] > TasksMax=infinity
	I0919 16:53:48.371780   85253 command_runner.go:130] > TimeoutStartSec=0
	I0919 16:53:48.371786   85253 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0919 16:53:48.371793   85253 command_runner.go:130] > Delegate=yes
	I0919 16:53:48.371799   85253 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0919 16:53:48.371808   85253 command_runner.go:130] > KillMode=process
	I0919 16:53:48.371815   85253 command_runner.go:130] > [Install]
	I0919 16:53:48.371832   85253 command_runner.go:130] > WantedBy=multi-user.target
	I0919 16:53:48.372071   85253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 16:53:48.383901   85253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 16:53:48.402079   85253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 16:53:48.413580   85253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 16:53:48.425486   85253 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 16:53:48.451047   85253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 16:53:48.463426   85253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 16:53:48.480146   85253 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
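	/etc/crictl.yaml is written twice in this run: earlier it pointed crictl at containerd's socket; now that docker (via cri-dockerd) is the chosen runtime, it points at /var/run/cri-dockerd.sock, which the crictl version probe further down goes through. The equivalent explicit invocation (illustrative):
	
	    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version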
	I0919 16:53:48.480546   85253 ssh_runner.go:195] Run: which cri-dockerd
	I0919 16:53:48.484165   85253 command_runner.go:130] > /usr/bin/cri-dockerd
	I0919 16:53:48.484277   85253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 16:53:48.492192   85253 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0919 16:53:48.507705   85253 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 16:53:48.607130   85253 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 16:53:48.719205   85253 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0919 16:53:48.719240   85253 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0919 16:53:48.735474   85253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 16:53:48.837757   85253 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 16:53:50.243142   85253 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.405328532s)
	I0919 16:53:50.243221   85253 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 16:53:50.343223   85253 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 16:53:50.450233   85253 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 16:53:50.563110   85253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 16:53:50.687287   85253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 16:53:50.707191   85253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 16:53:50.823936   85253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0919 16:53:50.925971   85253 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 16:53:50.926046   85253 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 16:53:50.933114   85253 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0919 16:53:50.933131   85253 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0919 16:53:50.933137   85253 command_runner.go:130] > Device: 16h/22d	Inode: 875         Links: 1
	I0919 16:53:50.933144   85253 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0919 16:53:50.933149   85253 command_runner.go:130] > Access: 2023-09-19 16:53:50.814533213 +0000
	I0919 16:53:50.933154   85253 command_runner.go:130] > Modify: 2023-09-19 16:53:50.814533213 +0000
	I0919 16:53:50.933159   85253 command_runner.go:130] > Change: 2023-09-19 16:53:50.817537984 +0000
	I0919 16:53:50.933163   85253 command_runner.go:130] >  Birth: -
	I0919 16:53:50.933368   85253 start.go:537] Will wait 60s for crictl version
	I0919 16:53:50.933417   85253 ssh_runner.go:195] Run: which crictl
	I0919 16:53:50.938241   85253 command_runner.go:130] > /usr/bin/crictl
	I0919 16:53:50.938302   85253 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 16:53:50.994244   85253 command_runner.go:130] > Version:  0.1.0
	I0919 16:53:50.994273   85253 command_runner.go:130] > RuntimeName:  docker
	I0919 16:53:50.994295   85253 command_runner.go:130] > RuntimeVersion:  24.0.6
	I0919 16:53:50.994403   85253 command_runner.go:130] > RuntimeApiVersion:  v1
	I0919 16:53:50.996201   85253 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I0919 16:53:50.996264   85253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 16:53:51.024171   85253 command_runner.go:130] > 24.0.6
	I0919 16:53:51.024447   85253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 16:53:51.049103   85253 command_runner.go:130] > 24.0.6
	I0919 16:53:51.050830   85253 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I0919 16:53:51.050890   85253 main.go:141] libmachine: (multinode-415589) Calling .GetIP
	I0919 16:53:51.054068   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:51.054408   85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
	I0919 16:53:51.054450   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:53:51.054599   85253 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0919 16:53:51.058775   85253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
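	The /etc/hosts edit above uses a filter-and-append idiom: strip any existing host.minikube.internal line, append the fresh entry, write the result to a temp file, then cp it over /etc/hosts. Using cp rather than mv or sed -i keeps the destination inode intact, which matters if /etc/hosts is bind-mounted. In isolation (entry values illustrative):
	
	    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
	      printf '192.168.50.1\thost.minikube.internal\n'; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts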
	I0919 16:53:51.071368   85253 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 16:53:51.071419   85253 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 16:53:51.089091   85253 docker.go:636] Got preloaded images: 
	I0919 16:53:51.089111   85253 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.2 wasn't preloaded
	I0919 16:53:51.089173   85253 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0919 16:53:51.098069   85253 command_runner.go:139] > {"Repositories":{}}
	I0919 16:53:51.098209   85253 ssh_runner.go:195] Run: which lz4
	I0919 16:53:51.102191   85253 command_runner.go:130] > /usr/bin/lz4
	I0919 16:53:51.102218   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0919 16:53:51.102289   85253 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0919 16:53:51.106440   85253 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 16:53:51.106470   85253 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 16:53:51.106485   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (422207204 bytes)
	I0919 16:53:52.757386   85253 docker.go:600] Took 1.655115 seconds to copy over tarball
	I0919 16:53:52.757462   85253 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0919 16:53:55.109451   85253 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.351953272s)
	I0919 16:53:55.109484   85253 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0919 16:53:55.147873   85253 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0919 16:53:55.157240   85253 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.2":"sha256:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce","registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c":"sha256:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.2":"sha256:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57","registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4":"sha256:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.2":"sha256:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0","registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf":"sha256:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.2":"sha256:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8","registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab":"sha256:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0919 16:53:55.157396   85253 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
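	Context for the two steps above: the preload tarball restores docker's overlay2 layer data together with a repositories.json mapping image tags to content digests, which is why the file read {"Repositories":{}} before extraction and is populated afterwards. minikube reads that map, writes the merged result back, and restarts dockerd so the daemon picks up both the layers and the tag map; the docker images listing that follows confirms the preloaded tags are visible. A quick way to inspect the map on the node (illustrative):
	
	    sudo head -c 300 /var/lib/docker/image/overlay2/repositories.json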
	I0919 16:53:55.174287   85253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 16:53:55.282401   85253 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 16:53:58.428664   85253 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.146218249s)
	I0919 16:53:58.428786   85253 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 16:53:58.453664   85253 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.2
	I0919 16:53:58.453686   85253 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.2
	I0919 16:53:58.453692   85253 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.2
	I0919 16:53:58.453702   85253 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.2
	I0919 16:53:58.453709   85253 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0919 16:53:58.453720   85253 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0919 16:53:58.453728   85253 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0919 16:53:58.453738   85253 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 16:53:58.453846   85253 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 16:53:58.453870   85253 cache_images.go:84] Images are preloaded, skipping loading
	I0919 16:53:58.453934   85253 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 16:53:58.482968   85253 command_runner.go:130] > cgroupfs
	I0919 16:53:58.484069   85253 cni.go:84] Creating CNI manager for ""
	I0919 16:53:58.484083   85253 cni.go:136] 1 nodes found, recommending kindnet
	I0919 16:53:58.484102   85253 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0919 16:53:58.484130   85253 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.11 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-415589 NodeName:multinode-415589 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 16:53:58.484279   85253 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-415589"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
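	The generated kubeadm config above stacks four YAML documents: InitConfiguration (node-local bootstrap: advertiseAddress, the cri-dockerd socket, node name and IP), ClusterConfiguration (cert SANs, control-plane endpoint, pod/service CIDRs, per-component extraArgs), KubeletConfiguration, and KubeProxyConfiguration. Per the scp lines below it lands on the node as /var/tmp/minikube/kubeadm.yaml.new, to be handed to kubeadm init via --config; a rough sketch of that later invocation (exact flags vary by minikube version, illustrative only):
	
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests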
	
	I0919 16:53:58.484375   85253 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-415589 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-415589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
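	kubelet settings are split between the systemd unit above (runtime flags: the CRI endpoint, node IP, hostname override, bootstrap kubeconfig) and /var/lib/kubelet/config.yaml, the KubeletConfiguration document that --config points at; the unit and its 10-kubeadm.conf drop-in are scp'd to the node just below. To view the merged unit on the node (illustrative):
	
	    systemctl cat kubelet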
	I0919 16:53:58.484440   85253 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0919 16:53:58.494650   85253 command_runner.go:130] > kubeadm
	I0919 16:53:58.494675   85253 command_runner.go:130] > kubectl
	I0919 16:53:58.494681   85253 command_runner.go:130] > kubelet
	I0919 16:53:58.494708   85253 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 16:53:58.494792   85253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 16:53:58.504326   85253 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0919 16:53:58.520389   85253 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 16:53:58.535724   85253 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0919 16:53:58.551227   85253 ssh_runner.go:195] Run: grep 192.168.50.11	control-plane.minikube.internal$ /etc/hosts
	I0919 16:53:58.554818   85253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 16:53:58.565786   85253 certs.go:56] Setting up /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589 for IP: 192.168.50.11
	I0919 16:53:58.565811   85253 certs.go:190] acquiring lock for shared ca certs: {Name:mkf975c4ed215d047afb89379d3c517cec3820b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:53:58.566310   85253 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.key
	I0919 16:53:58.566464   85253 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.key
	I0919 16:53:58.566554   85253 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.key
	I0919 16:53:58.566598   85253 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.crt with IP's: []
	I0919 16:53:58.622220   85253 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.crt ...
	I0919 16:53:58.622252   85253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.crt: {Name:mk7ec29a810283c598a22f6552f2c706bdcbda66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:53:58.622443   85253 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.key ...
	I0919 16:53:58.622457   85253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.key: {Name:mk0d34c3af68693664488a90c719b9e5e36f6ac8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:53:58.622561   85253 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/apiserver.key.249cd0a6
	I0919 16:53:58.622579   85253 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/apiserver.crt.249cd0a6 with IP's: [192.168.50.11 10.96.0.1 127.0.0.1 10.0.0.1]
	I0919 16:53:58.831877   85253 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/apiserver.crt.249cd0a6 ...
	I0919 16:53:58.831910   85253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/apiserver.crt.249cd0a6: {Name:mkb2c2ec3feeb95a530c3f5c703f0b1be4b37155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:53:58.832092   85253 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/apiserver.key.249cd0a6 ...
	I0919 16:53:58.832108   85253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/apiserver.key.249cd0a6: {Name:mkef0c4bc7ead672418d86c55797aef46d113dcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:53:58.832205   85253 certs.go:337] copying /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/apiserver.crt.249cd0a6 -> /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/apiserver.crt
	I0919 16:53:58.832301   85253 certs.go:341] copying /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/apiserver.key.249cd0a6 -> /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/apiserver.key
	I0919 16:53:58.832373   85253 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/proxy-client.key
	I0919 16:53:58.832394   85253 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/proxy-client.crt with IP's: []
	I0919 16:53:58.924169   85253 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/proxy-client.crt ...
	I0919 16:53:58.924199   85253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/proxy-client.crt: {Name:mk3637c165ef46259ddb4842eba5fdcf9d5a67da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:53:58.924381   85253 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/proxy-client.key ...
	I0919 16:53:58.924396   85253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/proxy-client.key: {Name:mkffc46297afcf14a50f349f0971a70fbc1459c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:53:58.924495   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 16:53:58.924518   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 16:53:58.924534   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 16:53:58.924550   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 16:53:58.924570   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 16:53:58.924589   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 16:53:58.924608   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 16:53:58.924628   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 16:53:58.924701   85253 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/73397.pem (1338 bytes)
	W0919 16:53:58.924750   85253 certs.go:433] ignoring /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/73397_empty.pem, impossibly tiny 0 bytes
	I0919 16:53:58.924768   85253 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 16:53:58.924805   85253 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem (1078 bytes)
	I0919 16:53:58.924839   85253 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem (1123 bytes)
	I0919 16:53:58.924876   85253 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem (1675 bytes)
	I0919 16:53:58.924932   85253 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem (1708 bytes)
	I0919 16:53:58.924972   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem -> /usr/share/ca-certificates/733972.pem
	I0919 16:53:58.924995   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 16:53:58.925013   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/73397.pem -> /usr/share/ca-certificates/73397.pem
	I0919 16:53:58.925530   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0919 16:53:58.949383   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 16:53:58.971267   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 16:53:58.992601   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 16:53:59.014627   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 16:53:59.036680   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 16:53:59.059344   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 16:53:59.080960   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 16:53:59.102635   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem --> /usr/share/ca-certificates/733972.pem (1708 bytes)
	I0919 16:53:59.124135   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 16:53:59.145432   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/certs/73397.pem --> /usr/share/ca-certificates/73397.pem (1338 bytes)
	I0919 16:53:59.166779   85253 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 16:53:59.182036   85253 ssh_runner.go:195] Run: openssl version
	I0919 16:53:59.187213   85253 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0919 16:53:59.187445   85253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 16:53:59.197576   85253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 16:53:59.202026   85253 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 19 16:35 /usr/share/ca-certificates/minikubeCA.pem
	I0919 16:53:59.202049   85253 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 19 16:35 /usr/share/ca-certificates/minikubeCA.pem
	I0919 16:53:59.202092   85253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 16:53:59.207183   85253 command_runner.go:130] > b5213941
	I0919 16:53:59.207418   85253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 16:53:59.217446   85253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73397.pem && ln -fs /usr/share/ca-certificates/73397.pem /etc/ssl/certs/73397.pem"
	I0919 16:53:59.227431   85253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73397.pem
	I0919 16:53:59.231902   85253 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 19 16:39 /usr/share/ca-certificates/73397.pem
	I0919 16:53:59.231925   85253 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 19 16:39 /usr/share/ca-certificates/73397.pem
	I0919 16:53:59.231961   85253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73397.pem
	I0919 16:53:59.237159   85253 command_runner.go:130] > 51391683
	I0919 16:53:59.237232   85253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/73397.pem /etc/ssl/certs/51391683.0"
	I0919 16:53:59.247187   85253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/733972.pem && ln -fs /usr/share/ca-certificates/733972.pem /etc/ssl/certs/733972.pem"
	I0919 16:53:59.257230   85253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/733972.pem
	I0919 16:53:59.261733   85253 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 19 16:39 /usr/share/ca-certificates/733972.pem
	I0919 16:53:59.261816   85253 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 19 16:39 /usr/share/ca-certificates/733972.pem
	I0919 16:53:59.261862   85253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/733972.pem
	I0919 16:53:59.266903   85253 command_runner.go:130] > 3ec20f2e
	I0919 16:53:59.267073   85253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/733972.pem /etc/ssl/certs/3ec20f2e.0"
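	[editor's note] Taken together, the commands above install each CA into the VM's OpenSSL trust store: compute the certificate's subject hash (b5213941, 51391683, 3ec20f2e in this run) and symlink /etc/ssl/certs/<hash>.0 at the PEM file. A minimal Go sketch of the same convention, shelling out to openssl just as the ssh_runner commands do; linkCA is an illustrative name, not a minikube function, and the path is the one from the log:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCA mirrors the log above: hash the cert with openssl, then
// symlink /etc/ssl/certs/<hash>.0 to the PEM so OpenSSL-based
// clients can look it up by subject hash.
func linkCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs` in the log
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```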
	I0919 16:53:59.277487   85253 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0919 16:53:59.281460   85253 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0919 16:53:59.281795   85253 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0919 16:53:59.281847   85253 kubeadm.go:404] StartCluster: {Name:multinode-415589 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-415589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.11 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 16:53:59.281980   85253 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 16:53:59.301062   85253 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 16:53:59.310950   85253 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0919 16:53:59.310980   85253 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0919 16:53:59.310990   85253 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0919 16:53:59.311059   85253 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 16:53:59.320120   85253 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 16:53:59.329254   85253 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0919 16:53:59.329281   85253 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0919 16:53:59.329291   85253 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0919 16:53:59.329303   85253 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 16:53:59.329343   85253 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 16:53:59.329379   85253 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
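	[editor's note] The init command above is launched through `bash -c` with PATH prefixed by /var/lib/minikube/binaries/v1.28.2, so the version-matched kubeadm wins over any system binary, and the preflight checks for directories and manifests minikube pre-stages itself are ignored. A rough local equivalent, sketched with os/exec under those assumptions (ignore list abbreviated; this is not minikube's actual code):

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	// Prefix PATH with the per-version binaries dir, as in the log line,
	// so /var/lib/minikube/binaries/v1.28.2/kubeadm is picked up first.
	cmd := exec.Command("sudo", "env",
		"PATH=/var/lib/minikube/binaries/v1.28.2:"+os.Getenv("PATH"),
		"kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```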
	I0919 16:53:59.669291   85253 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 16:53:59.669329   85253 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 16:54:11.198810   85253 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0919 16:54:11.198853   85253 command_runner.go:130] > [init] Using Kubernetes version: v1.28.2
	I0919 16:54:11.198911   85253 kubeadm.go:322] [preflight] Running pre-flight checks
	I0919 16:54:11.198922   85253 command_runner.go:130] > [preflight] Running pre-flight checks
	I0919 16:54:11.199013   85253 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 16:54:11.199020   85253 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 16:54:11.199112   85253 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 16:54:11.199122   85253 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 16:54:11.199219   85253 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 16:54:11.199239   85253 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 16:54:11.199335   85253 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 16:54:11.200988   85253 out.go:204]   - Generating certificates and keys ...
	I0919 16:54:11.199379   85253 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 16:54:11.201086   85253 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0919 16:54:11.201102   85253 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0919 16:54:11.201176   85253 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0919 16:54:11.201188   85253 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0919 16:54:11.201265   85253 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 16:54:11.201285   85253 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 16:54:11.201366   85253 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0919 16:54:11.201378   85253 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0919 16:54:11.201455   85253 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0919 16:54:11.201466   85253 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0919 16:54:11.201663   85253 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0919 16:54:11.201685   85253 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0919 16:54:11.201762   85253 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0919 16:54:11.201776   85253 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0919 16:54:11.201957   85253 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-415589] and IPs [192.168.50.11 127.0.0.1 ::1]
	I0919 16:54:11.201969   85253 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-415589] and IPs [192.168.50.11 127.0.0.1 ::1]
	I0919 16:54:11.202051   85253 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0919 16:54:11.202070   85253 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0919 16:54:11.202171   85253 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-415589] and IPs [192.168.50.11 127.0.0.1 ::1]
	I0919 16:54:11.202185   85253 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-415589] and IPs [192.168.50.11 127.0.0.1 ::1]
	I0919 16:54:11.202249   85253 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 16:54:11.202260   85253 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 16:54:11.202315   85253 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 16:54:11.202326   85253 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 16:54:11.202394   85253 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0919 16:54:11.202403   85253 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0919 16:54:11.202450   85253 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 16:54:11.202471   85253 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 16:54:11.202549   85253 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 16:54:11.202564   85253 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 16:54:11.202623   85253 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 16:54:11.202634   85253 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 16:54:11.202729   85253 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 16:54:11.202741   85253 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 16:54:11.202820   85253 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 16:54:11.202833   85253 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 16:54:11.202928   85253 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 16:54:11.202937   85253 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 16:54:11.202990   85253 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 16:54:11.203005   85253 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 16:54:11.205690   85253 out.go:204]   - Booting up control plane ...
	I0919 16:54:11.205795   85253 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 16:54:11.205806   85253 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 16:54:11.205900   85253 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 16:54:11.205908   85253 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 16:54:11.206014   85253 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 16:54:11.206037   85253 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 16:54:11.206186   85253 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 16:54:11.206200   85253 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 16:54:11.206303   85253 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 16:54:11.206315   85253 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 16:54:11.206371   85253 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0919 16:54:11.206383   85253 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0919 16:54:11.206577   85253 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 16:54:11.206588   85253 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 16:54:11.206692   85253 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.505224 seconds
	I0919 16:54:11.206702   85253 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.505224 seconds
	I0919 16:54:11.206841   85253 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 16:54:11.206856   85253 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 16:54:11.207005   85253 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 16:54:11.207015   85253 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 16:54:11.207089   85253 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0919 16:54:11.207100   85253 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 16:54:11.207331   85253 command_runner.go:130] > [mark-control-plane] Marking the node multinode-415589 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 16:54:11.207342   85253 kubeadm.go:322] [mark-control-plane] Marking the node multinode-415589 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 16:54:11.207399   85253 command_runner.go:130] > [bootstrap-token] Using token: a9n71v.a9970pz0xsqn3fiz
	I0919 16:54:11.207409   85253 kubeadm.go:322] [bootstrap-token] Using token: a9n71v.a9970pz0xsqn3fiz
	I0919 16:54:11.209021   85253 out.go:204]   - Configuring RBAC rules ...
	I0919 16:54:11.209273   85253 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 16:54:11.209292   85253 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 16:54:11.209358   85253 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 16:54:11.209368   85253 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 16:54:11.209529   85253 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 16:54:11.209540   85253 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 16:54:11.209676   85253 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 16:54:11.209696   85253 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 16:54:11.209865   85253 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 16:54:11.209880   85253 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 16:54:11.209998   85253 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 16:54:11.210008   85253 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 16:54:11.210175   85253 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 16:54:11.210179   85253 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 16:54:11.210255   85253 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0919 16:54:11.210259   85253 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0919 16:54:11.210303   85253 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0919 16:54:11.210307   85253 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0919 16:54:11.210310   85253 kubeadm.go:322] 
	I0919 16:54:11.210372   85253 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0919 16:54:11.210381   85253 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0919 16:54:11.210392   85253 kubeadm.go:322] 
	I0919 16:54:11.210495   85253 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0919 16:54:11.210506   85253 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0919 16:54:11.210513   85253 kubeadm.go:322] 
	I0919 16:54:11.210551   85253 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0919 16:54:11.210559   85253 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0919 16:54:11.210637   85253 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 16:54:11.210649   85253 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 16:54:11.210768   85253 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 16:54:11.210783   85253 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 16:54:11.210794   85253 kubeadm.go:322] 
	I0919 16:54:11.210871   85253 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0919 16:54:11.210887   85253 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0919 16:54:11.210891   85253 kubeadm.go:322] 
	I0919 16:54:11.210957   85253 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 16:54:11.210972   85253 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 16:54:11.210987   85253 kubeadm.go:322] 
	I0919 16:54:11.211064   85253 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0919 16:54:11.211071   85253 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0919 16:54:11.211158   85253 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 16:54:11.211166   85253 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 16:54:11.211252   85253 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 16:54:11.211263   85253 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 16:54:11.211269   85253 kubeadm.go:322] 
	I0919 16:54:11.211388   85253 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0919 16:54:11.211400   85253 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 16:54:11.211503   85253 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0919 16:54:11.211514   85253 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0919 16:54:11.211520   85253 kubeadm.go:322] 
	I0919 16:54:11.211643   85253 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token a9n71v.a9970pz0xsqn3fiz \
	I0919 16:54:11.211652   85253 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token a9n71v.a9970pz0xsqn3fiz \
	I0919 16:54:11.211801   85253 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f578345f21d61b70dd299dc2a715bc70c42e620a22beab30c3294ae8bc341510 \
	I0919 16:54:11.211812   85253 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f578345f21d61b70dd299dc2a715bc70c42e620a22beab30c3294ae8bc341510 \
	I0919 16:54:11.211840   85253 command_runner.go:130] > 	--control-plane 
	I0919 16:54:11.211849   85253 kubeadm.go:322] 	--control-plane 
	I0919 16:54:11.211855   85253 kubeadm.go:322] 
	I0919 16:54:11.211963   85253 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0919 16:54:11.211973   85253 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0919 16:54:11.211989   85253 kubeadm.go:322] 
	I0919 16:54:11.212097   85253 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token a9n71v.a9970pz0xsqn3fiz \
	I0919 16:54:11.212109   85253 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token a9n71v.a9970pz0xsqn3fiz \
	I0919 16:54:11.212234   85253 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f578345f21d61b70dd299dc2a715bc70c42e620a22beab30c3294ae8bc341510 
	I0919 16:54:11.212252   85253 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f578345f21d61b70dd299dc2a715bc70c42e620a22beab30c3294ae8bc341510 
	I0919 16:54:11.212260   85253 cni.go:84] Creating CNI manager for ""
	I0919 16:54:11.212268   85253 cni.go:136] 1 nodes found, recommending kindnet
	I0919 16:54:11.213988   85253 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0919 16:54:11.215465   85253 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 16:54:11.221692   85253 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0919 16:54:11.221716   85253 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0919 16:54:11.221725   85253 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0919 16:54:11.221733   85253 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0919 16:54:11.221743   85253 command_runner.go:130] > Access: 2023-09-19 16:53:37.309210321 +0000
	I0919 16:54:11.221755   85253 command_runner.go:130] > Modify: 2023-09-19 00:07:45.000000000 +0000
	I0919 16:54:11.221764   85253 command_runner.go:130] > Change: 2023-09-19 16:53:35.557210321 +0000
	I0919 16:54:11.221771   85253 command_runner.go:130] >  Birth: -
	I0919 16:54:11.221879   85253 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I0919 16:54:11.221898   85253 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0919 16:54:11.253438   85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 16:54:12.404972   85253 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0919 16:54:12.411309   85253 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0919 16:54:12.420524   85253 command_runner.go:130] > serviceaccount/kindnet created
	I0919 16:54:12.433168   85253 command_runner.go:130] > daemonset.apps/kindnet created
	I0919 16:54:12.435928   85253 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.182450679s)
	I0919 16:54:12.435978   85253 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 16:54:12.436080   85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:54:12.436096   85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986 minikube.k8s.io/name=multinode-415589 minikube.k8s.io/updated_at=2023_09_19T16_54_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:54:12.705419   85253 command_runner.go:130] > node/multinode-415589 labeled
	I0919 16:54:12.707175   85253 command_runner.go:130] > -16
	I0919 16:54:12.707206   85253 ops.go:34] apiserver oom_adj: -16
	I0919 16:54:12.707258   85253 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
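	[editor's note] The -16 above comes from reading /proc/$(pgrep kube-apiserver)/oom_adj: oom_adj is the legacy OOM-killer knob (range -17..15; newer kernels prefer oom_score_adj), and a negative value makes the kernel much less likely to kill the apiserver under memory pressure. A tiny sketch of the same check; oomAdj is an illustrative helper, with pgrep replaced by a pid argument:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// oomAdj reads the legacy OOM-killer adjustment for a pid, the same
// value the log obtains via `cat /proc/$(pgrep kube-apiserver)/oom_adj`.
func oomAdj(pid int) (string, error) {
	b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	v, err := oomAdj(os.Getpid())
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("oom_adj:", v) // "-16" for the apiserver in the log
}
```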
	I0919 16:54:12.707383   85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:54:12.825175   85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:54:12.827013   85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:54:12.922384   85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:54:13.424936   85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:54:13.535895   85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:54:13.924999   85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:54:14.017877   85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:54:14.424639   85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:54:14.527560   85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:54:14.925174   85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:54:15.026461   85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:54:15.425070   85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:54:15.521662   85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:54:15.925003   85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:54:16.045288   85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:54:16.424619   85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:54:16.527388   85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:54:16.924621   85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:54:17.030516   85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:54:17.424906   85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:54:17.570848   85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:54:17.925249   85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:54:18.020664   85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:54:18.424314   85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:54:18.530310   85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:54:18.924834   85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:54:19.011504   85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:54:19.424471   85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:54:19.525979   85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:54:19.925156   85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:54:20.012117   85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:54:20.424655   85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:54:20.513423   85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:54:20.924643   85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:54:21.029491   85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:54:21.424663   85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:54:21.523197   85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:54:21.924780   85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:54:22.031995   85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:54:22.424538   85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:54:22.563143   85253 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0919 16:54:22.924656   85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 16:54:23.050221   85253 command_runner.go:130] > NAME      SECRETS   AGE
	I0919 16:54:23.050251   85253 command_runner.go:130] > default   0         1s
	I0919 16:54:23.050283   85253 kubeadm.go:1081] duration metric: took 10.614274284s to wait for elevateKubeSystemPrivileges.
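	[editor's note] The ten seconds of NotFound errors above are the expected shape of this wait: the same `kubectl get sa default` is rerun roughly twice a second until the token controller creates the default ServiceAccount, which is the signal that kube-system privileges can be elevated. A hedged client-go version of that loop, assuming a constructed clientset; waitForDefaultSA is an illustrative name, not a minikube function:

```go
package sketch

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForDefaultSA polls until the "default" ServiceAccount exists,
// mirroring the kubectl retry loop in the log above.
func waitForDefaultSA(ctx context.Context, cs kubernetes.Interface) error {
	for {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
		if err == nil {
			return nil // token controller has caught up
		}
		if !apierrors.IsNotFound(err) {
			return err // anything other than NotFound is a real failure
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}
```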
	I0919 16:54:23.050300   85253 kubeadm.go:406] StartCluster complete in 23.768456629s
	I0919 16:54:23.050322   85253 settings.go:142] acquiring lock: {Name:mk5b0472b3a6dd507de44affe9807f6a73f90c46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:54:23.050401   85253 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17240-65689/kubeconfig
	I0919 16:54:23.051523   85253 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/kubeconfig: {Name:mkbd16610d1f40f08720849f4f6c1890dee4556c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:54:23.052392   85253 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17240-65689/kubeconfig
	I0919 16:54:23.052569   85253 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 16:54:23.052798   85253 config.go:182] Loaded profile config "multinode-415589": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 16:54:23.052462   85253 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0919 16:54:23.052866   85253 addons.go:69] Setting storage-provisioner=true in profile "multinode-415589"
	I0919 16:54:23.052886   85253 addons.go:231] Setting addon storage-provisioner=true in "multinode-415589"
	I0919 16:54:23.052886   85253 addons.go:69] Setting default-storageclass=true in profile "multinode-415589"
	I0919 16:54:23.052910   85253 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-415589"
	I0919 16:54:23.052968   85253 host.go:66] Checking if "multinode-415589" exists ...
	I0919 16:54:23.052950   85253 kapi.go:59] client config for multinode-415589: &rest.Config{Host:"https://192.168.50.11:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.crt", KeyFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.key", CAFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf0e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 16:54:23.053721   85253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 16:54:23.053724   85253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 16:54:23.053762   85253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:54:23.053782   85253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:54:23.054074   85253 cert_rotation.go:137] Starting client certificate rotation controller
	I0919 16:54:23.054475   85253 round_trippers.go:463] GET https://192.168.50.11:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0919 16:54:23.054493   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:23.054505   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:23.054514   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:23.066452   85253 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0919 16:54:23.066476   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:23.066486   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:23 GMT
	I0919 16:54:23.066495   85253 round_trippers.go:580]     Audit-Id: 3fb3b891-e57d-408a-a790-659aa608d8f8
	I0919 16:54:23.066503   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:23.066518   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:23.066530   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:23.066538   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:23.066549   85253 round_trippers.go:580]     Content-Length: 291
	I0919 16:54:23.066582   85253 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"51735e10-f9cc-4bf5-9383-854f680ad544","resourceVersion":"233","creationTimestamp":"2023-09-19T16:54:11Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0919 16:54:23.067102   85253 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"51735e10-f9cc-4bf5-9383-854f680ad544","resourceVersion":"233","creationTimestamp":"2023-09-19T16:54:11Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0919 16:54:23.067176   85253 round_trippers.go:463] PUT https://192.168.50.11:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0919 16:54:23.067191   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:23.067201   85253 round_trippers.go:473]     Content-Type: application/json
	I0919 16:54:23.067210   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:23.067225   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:23.069176   85253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39397
	I0919 16:54:23.069496   85253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
	I0919 16:54:23.069788   85253 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:54:23.069991   85253 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:54:23.070303   85253 main.go:141] libmachine: Using API Version  1
	I0919 16:54:23.070326   85253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:54:23.070589   85253 main.go:141] libmachine: Using API Version  1
	I0919 16:54:23.070612   85253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:54:23.070663   85253 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:54:23.070874   85253 main.go:141] libmachine: (multinode-415589) Calling .GetState
	I0919 16:54:23.070964   85253 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:54:23.071548   85253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 16:54:23.071596   85253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:54:23.073129   85253 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17240-65689/kubeconfig
	I0919 16:54:23.073391   85253 kapi.go:59] client config for multinode-415589: &rest.Config{Host:"https://192.168.50.11:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.crt", KeyFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.key", CAFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf0e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 16:54:23.073711   85253 round_trippers.go:463] GET https://192.168.50.11:8443/apis/storage.k8s.io/v1/storageclasses
	I0919 16:54:23.073723   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:23.073740   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:23.073751   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:23.081718   85253 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0919 16:54:23.081742   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:23.081754   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:23.081761   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:23.081770   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:23.081778   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:23.081785   85253 round_trippers.go:580]     Content-Length: 291
	I0919 16:54:23.081795   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:23 GMT
	I0919 16:54:23.081803   85253 round_trippers.go:580]     Audit-Id: a47bac4e-528a-45e9-bf5b-1a488ee61e83
	I0919 16:54:23.082138   85253 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"51735e10-f9cc-4bf5-9383-854f680ad544","resourceVersion":"314","creationTimestamp":"2023-09-19T16:54:11Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0919 16:54:23.082271   85253 round_trippers.go:463] GET https://192.168.50.11:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0919 16:54:23.082296   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:23.082308   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:23.082322   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:23.082630   85253 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0919 16:54:23.082649   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:23.082658   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:23.082667   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:23.082677   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:23.082689   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:23.082697   85253 round_trippers.go:580]     Content-Length: 109
	I0919 16:54:23.082714   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:23 GMT
	I0919 16:54:23.082726   85253 round_trippers.go:580]     Audit-Id: b7b49ea7-8771-4c2f-86c9-b5dbf1df04f7
	I0919 16:54:23.082748   85253 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"314"},"items":[]}
	I0919 16:54:23.083030   85253 addons.go:231] Setting addon default-storageclass=true in "multinode-415589"
	I0919 16:54:23.083074   85253 host.go:66] Checking if "multinode-415589" exists ...
	I0919 16:54:23.083444   85253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 16:54:23.083489   85253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:54:23.084530   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:23.084559   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:23.084570   85253 round_trippers.go:580]     Content-Length: 291
	I0919 16:54:23.084585   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:23 GMT
	I0919 16:54:23.084602   85253 round_trippers.go:580]     Audit-Id: ef73916a-8c8f-4942-90e0-b4b220c44e45
	I0919 16:54:23.084611   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:23.084622   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:23.084630   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:23.084641   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:23.084667   85253 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"51735e10-f9cc-4bf5-9383-854f680ad544","resourceVersion":"314","creationTimestamp":"2023-09-19T16:54:11Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0919 16:54:23.084764   85253 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-415589" context rescaled to 1 replicas
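	[editor's note] The GET/PUT pair above uses the autoscaling/v1 Scale subresource to drop the coredns Deployment from 2 replicas to 1, since a single-node cluster does not need two DNS pods. A minimal client-go sketch of that round trip, assuming a constructed clientset; rescaleCoreDNS is an illustrative name:

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rescaleCoreDNS performs the same Scale-subresource round trip as the
// log: read the current scale, set replicas, write it back.
func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
	deployments := cs.AppsV1().Deployments("kube-system")
	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas // 1 in the log
	_, err = deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}
```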
	I0919 16:54:23.084797   85253 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.11 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 16:54:23.087739   85253 out.go:177] * Verifying Kubernetes components...
	I0919 16:54:23.089185   85253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 16:54:23.087325   85253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35911
	I0919 16:54:23.089664   85253 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:54:23.090200   85253 main.go:141] libmachine: Using API Version  1
	I0919 16:54:23.090229   85253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:54:23.090630   85253 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:54:23.090855   85253 main.go:141] libmachine: (multinode-415589) Calling .GetState
	I0919 16:54:23.092613   85253 main.go:141] libmachine: (multinode-415589) Calling .DriverName
	I0919 16:54:23.095622   85253 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 16:54:23.097249   85253 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 16:54:23.097268   85253 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 16:54:23.097289   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
	I0919 16:54:23.099014   85253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39535
	I0919 16:54:23.099485   85253 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:54:23.099942   85253 main.go:141] libmachine: Using API Version  1
	I0919 16:54:23.099962   85253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:54:23.100309   85253 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:54:23.100338   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:54:23.100817   85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
	I0919 16:54:23.100833   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:54:23.100908   85253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 16:54:23.100959   85253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:54:23.100990   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
	I0919 16:54:23.101140   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
	I0919 16:54:23.101272   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
	I0919 16:54:23.101518   85253 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/id_rsa Username:docker}
	I0919 16:54:23.115209   85253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33457
	I0919 16:54:23.115596   85253 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:54:23.116057   85253 main.go:141] libmachine: Using API Version  1
	I0919 16:54:23.116087   85253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:54:23.116464   85253 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:54:23.116680   85253 main.go:141] libmachine: (multinode-415589) Calling .GetState
	I0919 16:54:23.118234   85253 main.go:141] libmachine: (multinode-415589) Calling .DriverName
	I0919 16:54:23.118471   85253 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 16:54:23.118486   85253 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 16:54:23.118504   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
	I0919 16:54:23.121818   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:54:23.122267   85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
	I0919 16:54:23.122288   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:54:23.122446   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
	I0919 16:54:23.122608   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
	I0919 16:54:23.122799   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
	I0919 16:54:23.122957   85253 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/id_rsa Username:docker}
	I0919 16:54:23.344565   85253 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 16:54:23.436992   85253 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 16:54:23.571337   85253 command_runner.go:130] > apiVersion: v1
	I0919 16:54:23.571358   85253 command_runner.go:130] > data:
	I0919 16:54:23.571362   85253 command_runner.go:130] >   Corefile: |
	I0919 16:54:23.571366   85253 command_runner.go:130] >     .:53 {
	I0919 16:54:23.571370   85253 command_runner.go:130] >         errors
	I0919 16:54:23.571375   85253 command_runner.go:130] >         health {
	I0919 16:54:23.571380   85253 command_runner.go:130] >            lameduck 5s
	I0919 16:54:23.571384   85253 command_runner.go:130] >         }
	I0919 16:54:23.571389   85253 command_runner.go:130] >         ready
	I0919 16:54:23.571399   85253 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0919 16:54:23.571406   85253 command_runner.go:130] >            pods insecure
	I0919 16:54:23.571420   85253 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0919 16:54:23.571435   85253 command_runner.go:130] >            ttl 30
	I0919 16:54:23.571442   85253 command_runner.go:130] >         }
	I0919 16:54:23.571450   85253 command_runner.go:130] >         prometheus :9153
	I0919 16:54:23.571462   85253 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0919 16:54:23.571470   85253 command_runner.go:130] >            max_concurrent 1000
	I0919 16:54:23.571477   85253 command_runner.go:130] >         }
	I0919 16:54:23.571481   85253 command_runner.go:130] >         cache 30
	I0919 16:54:23.571485   85253 command_runner.go:130] >         loop
	I0919 16:54:23.571492   85253 command_runner.go:130] >         reload
	I0919 16:54:23.571501   85253 command_runner.go:130] >         loadbalance
	I0919 16:54:23.571511   85253 command_runner.go:130] >     }
	I0919 16:54:23.571518   85253 command_runner.go:130] > kind: ConfigMap
	I0919 16:54:23.571528   85253 command_runner.go:130] > metadata:
	I0919 16:54:23.571544   85253 command_runner.go:130] >   creationTimestamp: "2023-09-19T16:54:11Z"
	I0919 16:54:23.571554   85253 command_runner.go:130] >   name: coredns
	I0919 16:54:23.571565   85253 command_runner.go:130] >   namespace: kube-system
	I0919 16:54:23.571575   85253 command_runner.go:130] >   resourceVersion: "229"
	I0919 16:54:23.571587   85253 command_runner.go:130] >   uid: 0111cf12-53fa-4f83-8267-d0f1ad7aadd6
	I0919 16:54:23.571766   85253 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
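	[editor's note] The pipeline above rewrites the coredns ConfigMap in place: sed inserts a `hosts` block mapping host.minikube.internal to the host-side gateway of this run's VM network (192.168.50.1) ahead of the `forward` plugin, adds `log` before `errors`, and kubectl replace pushes the result back. The same edit via client-go might look like the sketch below (string surgery kept deliberately simple; patchCorefile is an illustrative name):

```go
package sketch

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// patchCorefile inserts a host.minikube.internal hosts{} block ahead of
// the forward plugin, like the sed pipeline in the log above.
func patchCorefile(ctx context.Context, cs kubernetes.Interface, hostIP string) error {
	cms := cs.CoreV1().ConfigMaps("kube-system")
	cm, err := cms.Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
		"        forward .", hosts+"        forward .", 1)
	_, err = cms.Update(ctx, cm, metav1.UpdateOptions{})
	return err
}
```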
	I0919 16:54:23.572139   85253 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17240-65689/kubeconfig
	I0919 16:54:23.572463   85253 kapi.go:59] client config for multinode-415589: &rest.Config{Host:"https://192.168.50.11:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.crt", KeyFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.key", CAFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf0e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
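The rest.Config dump above is minikube's own client wiring; a minimal client-go sketch that builds an equivalent client straight from the kubeconfig named in the log would look like this (illustrative only, using stock client-go rather than minikube's kapi helper):

    // build_client.go - sketch, assuming the kubeconfig path from the
    // "Config loaded from file" line above; standard client-go only.
    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        kubeconfig := "/home/jenkins/minikube-integration/17240-65689/kubeconfig"
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Per the dump above, cfg.Host should be https://192.168.50.11:8443
        // and the TLS material should come from the multinode-415589 profile.
        fmt.Println("API server:", cfg.Host)
        _ = clientset
    }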
	I0919 16:54:23.572798   85253 node_ready.go:35] waiting up to 6m0s for node "multinode-415589" to be "Ready" ...
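The wait that follows is a plain poll: about every 500ms the client GETs the node object (the round_trippers request/response pairs below) and checks its NodeReady condition, giving up after the stated 6m0s. A hand-rolled equivalent in standard client-go, reusing the clientset from the sketch above (an approximation of, not a copy of, minikube's node_ready logic):

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node until its NodeReady condition reports True
    // or the timeout expires. The 500ms interval mirrors the spacing of the
    // GETs below; the interval and error handling here are assumptions.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil // node reports Ready
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("node %q not Ready within %s", name, timeout)
    }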
	I0919 16:54:23.572883   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:23.572893   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:23.572906   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:23.572924   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:23.575029   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:23.575045   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:23.575051   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:23.575057   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:23 GMT
	I0919 16:54:23.575062   85253 round_trippers.go:580]     Audit-Id: d0ebbafb-fef0-46cc-88e7-24e96c02d631
	I0919 16:54:23.575067   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:23.575072   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:23.575077   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:23.575284   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
	I0919 16:54:23.575837   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:23.575851   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:23.575857   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:23.575864   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:23.578050   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:23.578065   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:23.578075   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:23.578084   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:23.578093   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:23.578102   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:23.578113   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:23 GMT
	I0919 16:54:23.578124   85253 round_trippers.go:580]     Audit-Id: 2b73ffab-9777-40c9-8846-a50585810aa1
	I0919 16:54:23.578320   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
	I0919 16:54:24.078944   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:24.078969   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:24.078979   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:24.078990   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:24.083444   85253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 16:54:24.083466   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:24.083473   85253 round_trippers.go:580]     Audit-Id: af937ad5-0e36-48c3-a08a-c957bcca19ab
	I0919 16:54:24.083481   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:24.083490   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:24.083499   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:24.083507   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:24.083520   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:24 GMT
	I0919 16:54:24.083661   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
	I0919 16:54:24.579659   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:24.579685   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:24.579697   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:24.579708   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:24.581929   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:24.581952   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:24.581963   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:24.581972   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:24 GMT
	I0919 16:54:24.581994   85253 round_trippers.go:580]     Audit-Id: fb770a18-7e10-4f64-895c-21eaac304460
	I0919 16:54:24.582006   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:24.582015   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:24.582025   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:24.582294   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
	I0919 16:54:24.807796   85253 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0919 16:54:24.817012   85253 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0919 16:54:24.832410   85253 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0919 16:54:24.847786   85253 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0919 16:54:24.866141   85253 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0919 16:54:24.888064   85253 command_runner.go:130] > pod/storage-provisioner created
	I0919 16:54:24.903790   85253 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.559180744s)
	I0919 16:54:24.903840   85253 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0919 16:54:24.903863   85253 main.go:141] libmachine: Making call to close driver server
	I0919 16:54:24.903867   85253 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.466846375s)
	I0919 16:54:24.903881   85253 main.go:141] libmachine: (multinode-415589) Calling .Close
	I0919 16:54:24.903901   85253 main.go:141] libmachine: Making call to close driver server
	I0919 16:54:24.903924   85253 main.go:141] libmachine: (multinode-415589) Calling .Close
	I0919 16:54:24.903907   85253 command_runner.go:130] > configmap/coredns replaced
	I0919 16:54:24.904044   85253 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.332247153s)
	I0919 16:54:24.904071   85253 start.go:917] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0919 16:54:24.904209   85253 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:54:24.904232   85253 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:54:24.904243   85253 main.go:141] libmachine: Making call to close driver server
	I0919 16:54:24.904243   85253 main.go:141] libmachine: (multinode-415589) DBG | Closing plugin on server side
	I0919 16:54:24.904252   85253 main.go:141] libmachine: (multinode-415589) Calling .Close
	I0919 16:54:24.904217   85253 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:54:24.904288   85253 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:54:24.904308   85253 main.go:141] libmachine: Making call to close driver server
	I0919 16:54:24.904324   85253 main.go:141] libmachine: (multinode-415589) Calling .Close
	I0919 16:54:24.904486   85253 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:54:24.904527   85253 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:54:24.904623   85253 main.go:141] libmachine: (multinode-415589) DBG | Closing plugin on server side
	I0919 16:54:24.904701   85253 main.go:141] libmachine: Making call to close driver server
	I0919 16:54:24.904719   85253 main.go:141] libmachine: (multinode-415589) Calling .Close
	I0919 16:54:24.904950   85253 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:54:24.904965   85253 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:54:24.906096   85253 main.go:141] libmachine: Successfully made call to close driver server
	I0919 16:54:24.906125   85253 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 16:54:24.908086   85253 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0919 16:54:24.909584   85253 addons.go:502] enable addons completed in 1.857135481s: enabled=[default-storageclass storage-provisioner]
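The two addon applies that just completed were executed inside the VM over SSH; stripped of ssh_runner, each is a single kubectl invocation. A rough Go sketch of one of them, with the in-VM paths taken from the completion lines above (illustrative only, and only meaningful inside the guest):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Mirrors the first "Completed:" line above; sudo accepts the
        // leading KUBECONFIG=... assignment as an environment setting.
        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.28.2/kubectl", "apply",
            "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("kubectl apply failed: %v\n%s", err, out)
        }
    }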
	I0919 16:54:25.079433   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:25.079457   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:25.079465   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:25.079471   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:25.082174   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:25.082198   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:25.082206   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:25.082211   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:25.082217   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:25.082222   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:25 GMT
	I0919 16:54:25.082227   85253 round_trippers.go:580]     Audit-Id: eaec7cd0-19fb-4851-abab-2853e5170772
	I0919 16:54:25.082232   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:25.082617   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
	I0919 16:54:25.579306   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:25.579336   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:25.579352   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:25.579361   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:25.581778   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:25.581798   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:25.581805   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:25.581810   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:25 GMT
	I0919 16:54:25.581816   85253 round_trippers.go:580]     Audit-Id: 54a24287-7988-4d7e-b14b-59143dbfae20
	I0919 16:54:25.581821   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:25.581826   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:25.581831   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:25.582219   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
	I0919 16:54:25.582555   85253 node_ready.go:58] node "multinode-415589" has status "Ready":"False"
	I0919 16:54:26.078882   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:26.078912   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:26.078921   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:26.078933   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:26.082036   85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 16:54:26.082060   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:26.082071   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:26 GMT
	I0919 16:54:26.082080   85253 round_trippers.go:580]     Audit-Id: 31ae632d-0073-4887-a5d8-c9dc78573675
	I0919 16:54:26.082088   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:26.082094   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:26.082099   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:26.082104   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:26.082233   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
	I0919 16:54:26.578863   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:26.578885   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:26.578893   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:26.578899   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:26.581484   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:26.581507   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:26.581518   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:26.581527   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:26.581536   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:26 GMT
	I0919 16:54:26.581542   85253 round_trippers.go:580]     Audit-Id: b16e8fbf-c2f2-4a76-a249-094835df55bf
	I0919 16:54:26.581547   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:26.581553   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:26.582036   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
	I0919 16:54:27.079526   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:27.079550   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:27.079558   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:27.079564   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:27.082184   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:27.082205   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:27.082212   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:27.082217   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:27.082222   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:27.082228   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:27.082233   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:27 GMT
	I0919 16:54:27.082238   85253 round_trippers.go:580]     Audit-Id: 408ad81e-c299-4cc7-821e-f8635627a2e7
	I0919 16:54:27.082414   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
	I0919 16:54:27.579079   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:27.579103   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:27.579118   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:27.579131   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:27.582001   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:27.582025   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:27.582033   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:27 GMT
	I0919 16:54:27.582038   85253 round_trippers.go:580]     Audit-Id: 7d0c435f-6907-48ae-b542-602f753109db
	I0919 16:54:27.582044   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:27.582049   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:27.582057   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:27.582062   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:27.582219   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
	I0919 16:54:28.079021   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:28.079041   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:28.079056   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:28.079062   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:28.081690   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:28.081710   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:28.081720   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:28 GMT
	I0919 16:54:28.081728   85253 round_trippers.go:580]     Audit-Id: 1188f209-5793-4129-903c-7b1a39b3808a
	I0919 16:54:28.081737   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:28.081745   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:28.081758   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:28.081771   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:28.082330   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
	I0919 16:54:28.082807   85253 node_ready.go:58] node "multinode-415589" has status "Ready":"False"
	I0919 16:54:28.579510   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:28.579535   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:28.579547   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:28.579556   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:28.582180   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:28.582205   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:28.582215   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:28.582222   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:28.582227   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:28.582234   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:28.582242   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:28 GMT
	I0919 16:54:28.582249   85253 round_trippers.go:580]     Audit-Id: 377c82ef-a6ca-43c3-99ab-7cf8c1e74e23
	I0919 16:54:28.582813   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
	I0919 16:54:29.079114   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:29.079140   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:29.079149   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:29.079156   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:29.082314   85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 16:54:29.082343   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:29.082355   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:29.082365   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:29.082373   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:29 GMT
	I0919 16:54:29.082379   85253 round_trippers.go:580]     Audit-Id: 52b454b1-f632-4de3-afb5-6d60a8ce9a48
	I0919 16:54:29.082384   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:29.082390   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:29.082684   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
	I0919 16:54:29.579312   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:29.579334   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:29.579342   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:29.579347   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:29.581990   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:29.582009   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:29.582016   85253 round_trippers.go:580]     Audit-Id: d6c5ab61-5407-458d-b823-0cfa3d6c387b
	I0919 16:54:29.582022   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:29.582028   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:29.582036   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:29.582045   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:29.582054   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:29 GMT
	I0919 16:54:29.582464   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
	I0919 16:54:30.079091   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:30.079116   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:30.079125   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:30.079132   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:30.082136   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:30.082158   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:30.082166   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:30 GMT
	I0919 16:54:30.082177   85253 round_trippers.go:580]     Audit-Id: 8be1cd1b-8f98-442d-9c93-5eb12bb43a1a
	I0919 16:54:30.082182   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:30.082188   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:30.082193   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:30.082198   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:30.082632   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
	I0919 16:54:30.082925   85253 node_ready.go:58] node "multinode-415589" has status "Ready":"False"
	I0919 16:54:30.579333   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:30.579360   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:30.579373   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:30.579383   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:30.582381   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:30.582407   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:30.582417   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:30.582425   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:30 GMT
	I0919 16:54:30.582433   85253 round_trippers.go:580]     Audit-Id: 27ab3e63-7b66-4e76-a41c-f899f34f2400
	I0919 16:54:30.582441   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:30.582449   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:30.582460   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:30.583007   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
	I0919 16:54:31.079729   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:31.079752   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:31.079761   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:31.079767   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:31.082734   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:31.082761   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:31.082771   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:31 GMT
	I0919 16:54:31.082778   85253 round_trippers.go:580]     Audit-Id: 55f1987f-797c-4780-8f49-8c8a3b5d5a84
	I0919 16:54:31.082787   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:31.082799   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:31.082806   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:31.082811   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:31.082999   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
	I0919 16:54:31.579772   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:31.579804   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:31.579819   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:31.579828   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:31.583902   85253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 16:54:31.583923   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:31.583934   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:31.583941   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:31.583948   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:31.583955   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:31 GMT
	I0919 16:54:31.583963   85253 round_trippers.go:580]     Audit-Id: bce7e871-b64f-4aeb-8239-3182ddff19eb
	I0919 16:54:31.583973   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:31.584282   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
	I0919 16:54:32.078932   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:32.078959   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:32.078967   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:32.078974   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:32.081964   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:32.081989   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:32.081999   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:32 GMT
	I0919 16:54:32.082006   85253 round_trippers.go:580]     Audit-Id: 282aa01b-6444-4cea-90a4-5311b329b4c8
	I0919 16:54:32.082013   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:32.082021   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:32.082028   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:32.082036   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:32.082352   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
	I0919 16:54:32.579065   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:32.579091   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:32.579099   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:32.579105   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:32.581841   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:32.581863   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:32.581874   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:32.581881   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:32.581887   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:32.581895   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:32 GMT
	I0919 16:54:32.581902   85253 round_trippers.go:580]     Audit-Id: 6eb45b8a-87f1-4d09-8136-06e3fa57ffee
	I0919 16:54:32.581911   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:32.582129   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
	I0919 16:54:32.582461   85253 node_ready.go:58] node "multinode-415589" has status "Ready":"False"
	I0919 16:54:33.079239   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:33.079265   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:33.079273   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:33.079280   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:33.081960   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:33.081990   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:33.082000   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:33.082009   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:33 GMT
	I0919 16:54:33.082016   85253 round_trippers.go:580]     Audit-Id: 42048eca-92af-4241-b48a-5d619f95fda3
	I0919 16:54:33.082022   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:33.082033   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:33.082041   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:33.082633   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
	I0919 16:54:33.579652   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:33.579677   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:33.579685   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:33.579692   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:33.582613   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:33.582645   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:33.582656   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:33.582664   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:33.582672   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:33.582679   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:33 GMT
	I0919 16:54:33.582684   85253 round_trippers.go:580]     Audit-Id: f7161931-e757-40ba-9c09-e36dae1ea406
	I0919 16:54:33.582689   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:33.582810   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
	I0919 16:54:34.079390   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:34.079417   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:34.079425   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:34.079431   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:34.083942   85253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 16:54:34.083966   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:34.083975   85253 round_trippers.go:580]     Audit-Id: e4907963-6b2c-4f41-8dc2-76b1ec9a0d7c
	I0919 16:54:34.083982   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:34.083990   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:34.083998   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:34.084006   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:34.084014   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:34 GMT
	I0919 16:54:34.085255   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
	I0919 16:54:34.578928   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:34.578953   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:34.578962   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:34.578968   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:34.581424   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:34.581448   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:34.581458   85253 round_trippers.go:580]     Audit-Id: 588c2d7a-e397-42dc-93d4-0b3e6813a309
	I0919 16:54:34.581467   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:34.581476   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:34.581487   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:34.581497   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:34.581505   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:34 GMT
	I0919 16:54:34.582079   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
	I0919 16:54:35.079820   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:35.079848   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:35.079863   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:35.079872   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:35.082774   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:35.082799   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:35.082815   85253 round_trippers.go:580]     Audit-Id: 8e8bc260-e87b-4987-ae8d-88002347daad
	I0919 16:54:35.082823   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:35.082832   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:35.082840   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:35.082851   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:35.082860   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:35 GMT
	I0919 16:54:35.083149   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
	I0919 16:54:35.083486   85253 node_ready.go:58] node "multinode-415589" has status "Ready":"False"
	I0919 16:54:35.578843   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:35.578868   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:35.578877   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:35.578882   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:35.581603   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:35.581632   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:35.581640   85253 round_trippers.go:580]     Audit-Id: 5edbbc8d-502f-4a6c-b7ab-5513f2f2d8d0
	I0919 16:54:35.581645   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:35.581650   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:35.581655   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:35.581660   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:35.581674   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:35 GMT
	I0919 16:54:35.582048   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"321","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 5036 chars]
	I0919 16:54:36.079755   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:36.079776   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:36.079784   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:36.079790   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:36.083100   85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 16:54:36.083127   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:36.083139   85253 round_trippers.go:580]     Audit-Id: 6a68aebf-7a22-4516-ae15-d1414b4a173c
	I0919 16:54:36.083149   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:36.083157   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:36.083168   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:36.083178   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:36.083187   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:36 GMT
	I0919 16:54:36.083358   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"391","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4902 chars]
	I0919 16:54:36.083705   85253 node_ready.go:49] node "multinode-415589" has status "Ready":"True"
	I0919 16:54:36.083721   85253 node_ready.go:38] duration metric: took 12.510902115s waiting for node "multinode-415589" to be "Ready" ...
	I0919 16:54:36.083730   85253 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
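
What the trace above boils down to: minikube's node_ready loop polls GET /api/v1/nodes/multinode-415589 roughly every 500ms until the node's Ready condition reports "True" (here after about 12.5s, once the object advanced to resourceVersion 391), then hands off to the system-critical pod waits. A minimal client-go sketch of that polling pattern follows, assuming a local kubeconfig; it is illustrative, not minikube's actual node_ready.go, and the helper name pollNodeReady is invented:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// pollNodeReady polls the named node until its Ready condition is True,
// mirroring the ~500ms GET cadence visible in the log above.
// (Illustrative sketch, not minikube's implementation.)
func pollNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat transient API errors as "not ready yet"
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := pollNodeReady(context.Background(), cs, "multinode-415589", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
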
	I0919 16:54:36.083829   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods
	I0919 16:54:36.083842   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:36.083853   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:36.083863   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:36.087213   85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 16:54:36.087227   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:36.087234   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:36 GMT
	I0919 16:54:36.087239   85253 round_trippers.go:580]     Audit-Id: d1bfd11f-4c54-44b8-b002-9e829ff3ef66
	I0919 16:54:36.087245   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:36.087250   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:36.087254   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:36.087260   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:36.089610   85253 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"396"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ctsv5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d4fcd880-e2ad-4d44-a070-e2af114e5e38","resourceVersion":"393","creationTimestamp":"2023-09-19T16:54:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6b79652-1294-4fc4-9085-191712db297a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6b79652-1294-4fc4-9085-191712db297a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52583 chars]
	I0919 16:54:36.092461   85253 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ctsv5" in "kube-system" namespace to be "Ready" ...
	I0919 16:54:36.092529   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ctsv5
	I0919 16:54:36.092537   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:36.092544   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:36.092550   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:36.095488   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:36.095504   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:36.095511   85253 round_trippers.go:580]     Audit-Id: 718b961f-0e26-4bc8-9ac3-80cb3a3de233
	I0919 16:54:36.095516   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:36.095521   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:36.095526   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:36.095531   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:36.095538   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:36 GMT
	I0919 16:54:36.095683   85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ctsv5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d4fcd880-e2ad-4d44-a070-e2af114e5e38","resourceVersion":"393","creationTimestamp":"2023-09-19T16:54:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6b79652-1294-4fc4-9085-191712db297a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6b79652-1294-4fc4-9085-191712db297a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 4762 chars]
	I0919 16:54:36.096034   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:36.096044   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:36.096051   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:36.096056   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:36.097883   85253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 16:54:36.097898   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:36.097904   85253 round_trippers.go:580]     Audit-Id: 64e54a98-8b5e-46dd-87ee-85735c0daf8d
	I0919 16:54:36.097909   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:36.097914   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:36.097919   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:36.097923   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:36.097928   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:36 GMT
	I0919 16:54:36.098091   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"391","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4902 chars]
	I0919 16:54:36.098533   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ctsv5
	I0919 16:54:36.098549   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:36.098559   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:36.098568   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:36.107823   85253 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0919 16:54:36.107840   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:36.107847   85253 round_trippers.go:580]     Audit-Id: 4b5248cc-edd6-4208-a977-e880d8144266
	I0919 16:54:36.107852   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:36.107858   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:36.107862   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:36.107870   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:36.107878   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:36 GMT
	I0919 16:54:36.108011   85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ctsv5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d4fcd880-e2ad-4d44-a070-e2af114e5e38","resourceVersion":"397","creationTimestamp":"2023-09-19T16:54:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6b79652-1294-4fc4-9085-191712db297a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6b79652-1294-4fc4-9085-191712db297a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0919 16:54:36.108477   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:36.108491   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:36.108498   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:36.108505   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:36.114196   85253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 16:54:36.114212   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:36.114220   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:36.114225   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:36.114230   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:36.114235   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:36 GMT
	I0919 16:54:36.114241   85253 round_trippers.go:580]     Audit-Id: 094831fc-42f2-4433-8e48-5d75cd8242c2
	I0919 16:54:36.114248   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:36.114588   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"391","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4902 chars]
	I0919 16:54:36.615484   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ctsv5
	I0919 16:54:36.615530   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:36.615544   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:36.615552   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:36.618536   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:36.618557   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:36.618564   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:36.618570   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:36 GMT
	I0919 16:54:36.618575   85253 round_trippers.go:580]     Audit-Id: 365dd419-91ce-4cf7-96c6-7990154ccda9
	I0919 16:54:36.618580   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:36.618586   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:36.618591   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:36.618800   85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ctsv5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d4fcd880-e2ad-4d44-a070-e2af114e5e38","resourceVersion":"397","creationTimestamp":"2023-09-19T16:54:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6b79652-1294-4fc4-9085-191712db297a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6b79652-1294-4fc4-9085-191712db297a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0919 16:54:36.619414   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:36.619437   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:36.619448   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:36.619457   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:36.621668   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:36.621681   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:36.621687   85253 round_trippers.go:580]     Audit-Id: f21b4c11-44e0-4b00-a2f3-291f1de2a4d3
	I0919 16:54:36.621692   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:36.621697   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:36.621702   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:36.621707   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:36.621713   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:36 GMT
	I0919 16:54:36.621933   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"391","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4902 chars]
	I0919 16:54:37.115654   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ctsv5
	I0919 16:54:37.115678   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:37.115686   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:37.115695   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:37.120257   85253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 16:54:37.120286   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:37.120298   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:37.120306   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:37.120315   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:37.120322   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:37.120331   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:37 GMT
	I0919 16:54:37.120340   85253 round_trippers.go:580]     Audit-Id: 48e29f15-3042-4a43-9b40-68ffd3961bf0
	I0919 16:54:37.120555   85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ctsv5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d4fcd880-e2ad-4d44-a070-e2af114e5e38","resourceVersion":"397","creationTimestamp":"2023-09-19T16:54:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6b79652-1294-4fc4-9085-191712db297a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6b79652-1294-4fc4-9085-191712db297a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0919 16:54:37.121050   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:37.121065   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:37.121073   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:37.121079   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:37.123200   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:37.123221   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:37.123230   85253 round_trippers.go:580]     Audit-Id: 17fac475-35f5-4c6a-82a9-7397eadbdc1e
	I0919 16:54:37.123239   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:37.123247   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:37.123255   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:37.123267   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:37.123278   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:37 GMT
	I0919 16:54:37.123510   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"391","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4902 chars]
	I0919 16:54:37.615184   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ctsv5
	I0919 16:54:37.615207   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:37.615215   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:37.615221   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:37.617757   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:37.617780   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:37.617790   85253 round_trippers.go:580]     Audit-Id: 31fd8e3e-7862-4124-960b-fadc32cfb060
	I0919 16:54:37.617798   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:37.617806   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:37.617814   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:37.617823   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:37.617838   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:37 GMT
	I0919 16:54:37.618051   85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ctsv5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d4fcd880-e2ad-4d44-a070-e2af114e5e38","resourceVersion":"397","creationTimestamp":"2023-09-19T16:54:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6b79652-1294-4fc4-9085-191712db297a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6b79652-1294-4fc4-9085-191712db297a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0919 16:54:37.618740   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:37.618761   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:37.618768   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:37.618774   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:37.620657   85253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 16:54:37.620674   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:37.620683   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:37.620691   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:37.620697   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:37 GMT
	I0919 16:54:37.620705   85253 round_trippers.go:580]     Audit-Id: 3256a8d7-1a8e-4d93-abfc-ec29d2085557
	I0919 16:54:37.620712   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:37.620724   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:37.621012   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"403","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0919 16:54:38.115730   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ctsv5
	I0919 16:54:38.115757   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:38.115765   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:38.115771   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:38.118667   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:38.118680   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:38.118700   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:38.118706   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:38.118711   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:38.118716   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:38 GMT
	I0919 16:54:38.118722   85253 round_trippers.go:580]     Audit-Id: 0f533a70-b2fa-4c43-90ce-c9c29edae6e9
	I0919 16:54:38.118728   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:38.119194   85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ctsv5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d4fcd880-e2ad-4d44-a070-e2af114e5e38","resourceVersion":"397","creationTimestamp":"2023-09-19T16:54:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6b79652-1294-4fc4-9085-191712db297a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6b79652-1294-4fc4-9085-191712db297a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0919 16:54:38.119735   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:38.119751   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:38.119759   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:38.119764   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:38.121980   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:38.121992   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:38.121998   85253 round_trippers.go:580]     Audit-Id: fae37c84-b36f-4885-9fce-cc4cfc962e43
	I0919 16:54:38.122003   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:38.122008   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:38.122013   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:38.122018   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:38.122023   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:38 GMT
	I0919 16:54:38.122401   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"403","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0919 16:54:38.122698   85253 pod_ready.go:102] pod "coredns-5dd5756b68-ctsv5" in "kube-system" namespace has status "Ready":"False"
	I0919 16:54:38.615039   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ctsv5
	I0919 16:54:38.615064   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:38.615072   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:38.615078   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:38.617651   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:38.617670   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:38.617677   85253 round_trippers.go:580]     Audit-Id: 2ed5ac42-6de7-42cf-ac64-7eb14d20bdb4
	I0919 16:54:38.617682   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:38.617687   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:38.617692   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:38.617697   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:38.617702   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:38 GMT
	I0919 16:54:38.618121   85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ctsv5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d4fcd880-e2ad-4d44-a070-e2af114e5e38","resourceVersion":"413","creationTimestamp":"2023-09-19T16:54:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6b79652-1294-4fc4-9085-191712db297a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6b79652-1294-4fc4-9085-191712db297a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6283 chars]
	I0919 16:54:38.618618   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:38.618633   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:38.618641   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:38.618646   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:38.620800   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:38.620812   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:38.620817   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:38.620822   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:38.620827   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:38.620833   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:38 GMT
	I0919 16:54:38.620838   85253 round_trippers.go:580]     Audit-Id: 2cf47788-16bc-4f31-926c-f530f19ac895
	I0919 16:54:38.620842   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:38.621097   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"403","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0919 16:54:38.621359   85253 pod_ready.go:92] pod "coredns-5dd5756b68-ctsv5" in "kube-system" namespace has status "Ready":"True"
	I0919 16:54:38.621374   85253 pod_ready.go:81] duration metric: took 2.528893039s waiting for pod "coredns-5dd5756b68-ctsv5" in "kube-system" namespace to be "Ready" ...
	I0919 16:54:38.621382   85253 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-415589" in "kube-system" namespace to be "Ready" ...
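
From here the same shape repeats per pod: pod_ready.go:78 opens a wait, :102 fires while the pod still reports "Ready":"False", and :92/:81 close it once the condition flips. A hedged pod-side analogue of the node sketch above (waitPodReady is an invented name; the imports are the same as in that sketch):

// waitPodReady polls a pod until its PodReady condition is True, the
// condition the kubelet sets once all containers pass their readiness
// probes. (Illustrative sketch, not minikube's pod_ready.go.)
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling through transient errors
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

For the etcd wait that begins next in the log, this would be invoked as waitPodReady(ctx, cs, "kube-system", "etcd-multinode-415589", 6*time.Minute).
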
	I0919 16:54:38.621428   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-415589
	I0919 16:54:38.621436   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:38.621442   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:38.621448   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:38.623182   85253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 16:54:38.623192   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:38.623200   85253 round_trippers.go:580]     Audit-Id: 5e3b578c-b0b1-46a9-9431-99415be92bb1
	I0919 16:54:38.623205   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:38.623210   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:38.623215   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:38.623220   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:38.623225   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:38 GMT
	I0919 16:54:38.623612   85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-415589","namespace":"kube-system","uid":"1dbf3be3-1373-453b-a745-575b7f604586","resourceVersion":"383","creationTimestamp":"2023-09-19T16:54:11Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.11:2379","kubernetes.io/config.hash":"6df6017a63b31f0e4794b474c009f352","kubernetes.io/config.mirror":"6df6017a63b31f0e4794b474c009f352","kubernetes.io/config.seen":"2023-09-19T16:54:11.230739231Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5862 chars]
	I0919 16:54:38.624077   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:38.624091   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:38.624098   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:38.624104   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:38.626064   85253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 16:54:38.626081   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:38.626090   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:38 GMT
	I0919 16:54:38.626096   85253 round_trippers.go:580]     Audit-Id: cc024073-c26c-41e6-8936-337ab34d4a34
	I0919 16:54:38.626101   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:38.626107   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:38.626116   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:38.626125   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:38.626237   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"403","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0919 16:54:38.626554   85253 pod_ready.go:92] pod "etcd-multinode-415589" in "kube-system" namespace has status "Ready":"True"
	I0919 16:54:38.626568   85253 pod_ready.go:81] duration metric: took 5.181196ms waiting for pod "etcd-multinode-415589" in "kube-system" namespace to be "Ready" ...
	I0919 16:54:38.626579   85253 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-415589" in "kube-system" namespace to be "Ready" ...
	I0919 16:54:38.626637   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-415589
	I0919 16:54:38.626648   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:38.626659   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:38.626667   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:38.628756   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:38.628770   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:38.628777   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:38 GMT
	I0919 16:54:38.628782   85253 round_trippers.go:580]     Audit-Id: 2d9cca00-0ce0-4f34-ae0e-bf946911fabe
	I0919 16:54:38.628787   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:38.628792   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:38.628797   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:38.628802   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:38.629012   85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-415589","namespace":"kube-system","uid":"4ecf615e-9f92-46f8-8b34-9de418bca0ac","resourceVersion":"384","creationTimestamp":"2023-09-19T16:54:11Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.50.11:8443","kubernetes.io/config.hash":"de462c90cfa089272f7e7f2885319010","kubernetes.io/config.mirror":"de462c90cfa089272f7e7f2885319010","kubernetes.io/config.seen":"2023-09-19T16:54:11.230732561Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7399 chars]
	I0919 16:54:38.629382   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:38.629395   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:38.629401   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:38.629407   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:38.631564   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:38.631584   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:38.631594   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:38.631603   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:38 GMT
	I0919 16:54:38.631611   85253 round_trippers.go:580]     Audit-Id: a5e77ed4-59e2-4092-aaca-ecff6790196e
	I0919 16:54:38.631621   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:38.631634   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:38.631645   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:38.631775   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"403","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0919 16:54:38.632027   85253 pod_ready.go:92] pod "kube-apiserver-multinode-415589" in "kube-system" namespace has status "Ready":"True"
	I0919 16:54:38.632041   85253 pod_ready.go:81] duration metric: took 5.455635ms waiting for pod "kube-apiserver-multinode-415589" in "kube-system" namespace to be "Ready" ...
	I0919 16:54:38.632053   85253 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-415589" in "kube-system" namespace to be "Ready" ...
	I0919 16:54:38.632098   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-415589
	I0919 16:54:38.632107   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:38.632117   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:38.632128   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:38.633909   85253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 16:54:38.633923   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:38.633931   85253 round_trippers.go:580]     Audit-Id: 24dbf25a-9b34-4832-a490-7b0ad821ce97
	I0919 16:54:38.633939   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:38.633947   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:38.633956   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:38.633973   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:38.633980   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:38 GMT
	I0919 16:54:38.634158   85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-415589","namespace":"kube-system","uid":"3b76511f-a4ea-484d-a0f7-6968c3abf350","resourceVersion":"385","creationTimestamp":"2023-09-19T16:54:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"504acb37dbf2142427850f2e779b05ad","kubernetes.io/config.mirror":"504acb37dbf2142427850f2e779b05ad","kubernetes.io/config.seen":"2023-09-19T16:54:02.792831460Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6969 chars]
	I0919 16:54:38.634515   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:38.634527   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:38.634534   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:38.634542   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:38.636075   85253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 16:54:38.636092   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:38.636101   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:38.636110   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:38.636124   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:38.636138   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:38.636144   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:38 GMT
	I0919 16:54:38.636149   85253 round_trippers.go:580]     Audit-Id: fd33917b-a4c4-4618-bdda-0f7d101290b3
	I0919 16:54:38.636468   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"403","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0919 16:54:38.636738   85253 pod_ready.go:92] pod "kube-controller-manager-multinode-415589" in "kube-system" namespace has status "Ready":"True"
	I0919 16:54:38.636752   85253 pod_ready.go:81] duration metric: took 4.691897ms waiting for pod "kube-controller-manager-multinode-415589" in "kube-system" namespace to be "Ready" ...
	I0919 16:54:38.636760   85253 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-r6jtp" in "kube-system" namespace to be "Ready" ...
	I0919 16:54:38.680048   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r6jtp
	I0919 16:54:38.680065   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:38.680073   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:38.680079   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:38.682274   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:38.682292   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:38.682301   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:38.682309   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:38.682316   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:38.682324   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:38.682333   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:38 GMT
	I0919 16:54:38.682340   85253 round_trippers.go:580]     Audit-Id: 1a137c0c-2ac7-46fd-91ae-1dd2d9d99601
	I0919 16:54:38.683128   85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-r6jtp","generateName":"kube-proxy-","namespace":"kube-system","uid":"a1f6a8f6-f608-4f79-9fd4-1a570bde14a6","resourceVersion":"376","creationTimestamp":"2023-09-19T16:54:23Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"5f6891df-57ac-4a88-9703-82c35d43e2eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5f6891df-57ac-4a88-9703-82c35d43e2eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
	I0919 16:54:38.880061   85253 request.go:629] Waited for 196.562901ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:38.880122   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:38.880127   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:38.880147   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:38.880153   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:38.882999   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:38.883018   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:38.883025   85253 round_trippers.go:580]     Audit-Id: 907ec3af-7e5a-4499-a794-66f124d88879
	I0919 16:54:38.883031   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:38.883036   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:38.883041   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:38.883047   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:38.883052   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:38 GMT
	I0919 16:54:38.883274   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"403","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0919 16:54:38.883613   85253 pod_ready.go:92] pod "kube-proxy-r6jtp" in "kube-system" namespace has status "Ready":"True"
	I0919 16:54:38.883629   85253 pod_ready.go:81] duration metric: took 246.863276ms waiting for pod "kube-proxy-r6jtp" in "kube-system" namespace to be "Ready" ...
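
The "Waited for ... due to client-side throttling" lines above come from client-go's own token-bucket rate limiter (a QPS ceiling with a Burst allowance), not from the server-side API Priority and Fairness machinery whose headers also appear in these responses. A minimal sketch of where those knobs live on a rest.Config; the kubeconfig path is a placeholder, not taken from this run:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; substitute your own.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	// client-go throttles on the client side: sustained traffic above
	// QPS (beyond the Burst allowance) blocks the request and emits the
	// "Waited for ... due to client-side throttling" log line.
	cfg.QPS = 5
	cfg.Burst = 10

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-415589", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(node.Name)
}
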
	I0919 16:54:38.883639   85253 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-415589" in "kube-system" namespace to be "Ready" ...
	I0919 16:54:39.079985   85253 request.go:629] Waited for 196.278198ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-415589
	I0919 16:54:39.080058   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-415589
	I0919 16:54:39.080064   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:39.080071   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:39.080078   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:39.082827   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:39.082844   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:39.082850   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:39.082858   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:39.082867   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:39.082875   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:39.082886   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:39 GMT
	I0919 16:54:39.082894   85253 round_trippers.go:580]     Audit-Id: 55f3aac3-fbc5-4c3a-a1e1-b724778bf564
	I0919 16:54:39.083072   85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-415589","namespace":"kube-system","uid":"6f43b8d1-3b77-4df6-8b66-7d08cf7c0682","resourceVersion":"362","creationTimestamp":"2023-09-19T16:54:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8d76d9bf6a9e2f131bdda3e4a41d04bb","kubernetes.io/config.mirror":"8d76d9bf6a9e2f131bdda3e4a41d04bb","kubernetes.io/config.seen":"2023-09-19T16:54:11.230737337Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4699 chars]
	I0919 16:54:39.280803   85253 request.go:629] Waited for 197.345341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:39.280873   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:54:39.280878   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:39.280886   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:39.280891   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:39.283272   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:39.283292   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:39.283299   85253 round_trippers.go:580]     Audit-Id: e53f0dde-13b1-4337-8c79-6b26e0e1862c
	I0919 16:54:39.283304   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:39.283309   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:39.283317   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:39.283322   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:39.283329   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:39 GMT
	I0919 16:54:39.283952   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"403","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0919 16:54:39.284237   85253 pod_ready.go:92] pod "kube-scheduler-multinode-415589" in "kube-system" namespace has status "Ready":"True"
	I0919 16:54:39.284252   85253 pod_ready.go:81] duration metric: took 400.608428ms waiting for pod "kube-scheduler-multinode-415589" in "kube-system" namespace to be "Ready" ...
	I0919 16:54:39.284262   85253 pod_ready.go:38] duration metric: took 3.200522207s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
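
Each of the pod_ready.go waits above is a poll against the pod's Ready condition with a 6m0s ceiling. A minimal sketch of the same pattern using client-go's wait helper; this is the idea in miniature, not minikube's actual pod_ready.go:

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a pod's Ready condition until it is True or the
// 6-minute budget (matching the waits above) runs out.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient errors and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
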
	I0919 16:54:39.284281   85253 api_server.go:52] waiting for apiserver process to appear ...
	I0919 16:54:39.284327   85253 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 16:54:39.297991   85253 command_runner.go:130] > 1917
	I0919 16:54:39.298423   85253 api_server.go:72] duration metric: took 16.213590475s to wait for apiserver process to appear ...
	I0919 16:54:39.298437   85253 api_server.go:88] waiting for apiserver healthz status ...
	I0919 16:54:39.298456   85253 api_server.go:253] Checking apiserver healthz at https://192.168.50.11:8443/healthz ...
	I0919 16:54:39.303494   85253 api_server.go:279] https://192.168.50.11:8443/healthz returned 200:
	ok
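
The healthz probe above is an ordinary HTTPS GET whose body must be exactly "ok". A rough equivalent in net/http; InsecureSkipVerify is illustration-only shorthand for the cluster-CA trust and client certificates the real client is configured with:

package main

import (
	"crypto/tls"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy GETs /healthz and requires a 200 with body "ok",
// mirroring the check logged above.
func apiserverHealthy(url string) (bool, error) {
	c := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := c.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}
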
	I0919 16:54:39.303549   85253 round_trippers.go:463] GET https://192.168.50.11:8443/version
	I0919 16:54:39.303557   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:39.303565   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:39.303571   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:39.304741   85253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 16:54:39.304756   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:39.304762   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:39 GMT
	I0919 16:54:39.304767   85253 round_trippers.go:580]     Audit-Id: c44e0f93-b99e-4aa0-a644-65bd8ca628c5
	I0919 16:54:39.304773   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:39.304781   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:39.304795   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:39.304804   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:39.304812   85253 round_trippers.go:580]     Content-Length: 263
	I0919 16:54:39.304828   85253 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.2",
	  "gitCommit": "89a4ea3e1e4ddd7f7572286090359983e0387b2f",
	  "gitTreeState": "clean",
	  "buildDate": "2023-09-13T09:29:07Z",
	  "goVersion": "go1.20.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0919 16:54:39.304891   85253 api_server.go:141] control plane version: v1.28.2
	I0919 16:54:39.304906   85253 api_server.go:131] duration metric: took 6.464103ms to wait for apiserver health ...
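
The /version payload shown above is the stock version.Info structure from k8s.io/apimachinery, so extracting the control-plane version is a single unmarshal. A sketch:

package main

import (
	"encoding/json"

	"k8s.io/apimachinery/pkg/version"
)

// controlPlaneVersion decodes a /version response body into
// apimachinery's version.Info and returns the git version string.
func controlPlaneVersion(body []byte) (string, error) {
	var info version.Info
	if err := json.Unmarshal(body, &info); err != nil {
		return "", err
	}
	return info.GitVersion, nil // "v1.28.2" for the body above
}
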
	I0919 16:54:39.304912   85253 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 16:54:39.480329   85253 request.go:629] Waited for 175.327843ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods
	I0919 16:54:39.480391   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods
	I0919 16:54:39.480396   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:39.480404   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:39.480410   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:39.483877   85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 16:54:39.483891   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:39.483898   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:39.483904   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:39.483909   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:39 GMT
	I0919 16:54:39.483914   85253 round_trippers.go:580]     Audit-Id: cea226cf-6caf-4ada-ae95-4dbf03735241
	I0919 16:54:39.483920   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:39.483925   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:39.485330   85253 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ctsv5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d4fcd880-e2ad-4d44-a070-e2af114e5e38","resourceVersion":"413","creationTimestamp":"2023-09-19T16:54:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6b79652-1294-4fc4-9085-191712db297a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6b79652-1294-4fc4-9085-191712db297a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I0919 16:54:39.487843   85253 system_pods.go:59] 8 kube-system pods found
	I0919 16:54:39.487877   85253 system_pods.go:61] "coredns-5dd5756b68-ctsv5" [d4fcd880-e2ad-4d44-a070-e2af114e5e38] Running
	I0919 16:54:39.487885   85253 system_pods.go:61] "etcd-multinode-415589" [1dbf3be3-1373-453b-a745-575b7f604586] Running
	I0919 16:54:39.487892   85253 system_pods.go:61] "kindnet-w9q5z" [39f88f25-8a6e-475c-8ef1-77c9d289fd48] Running
	I0919 16:54:39.487899   85253 system_pods.go:61] "kube-apiserver-multinode-415589" [4ecf615e-9f92-46f8-8b34-9de418bca0ac] Running
	I0919 16:54:39.487910   85253 system_pods.go:61] "kube-controller-manager-multinode-415589" [3b76511f-a4ea-484d-a0f7-6968c3abf350] Running
	I0919 16:54:39.487916   85253 system_pods.go:61] "kube-proxy-r6jtp" [a1f6a8f6-f608-4f79-9fd4-1a570bde14a6] Running
	I0919 16:54:39.487922   85253 system_pods.go:61] "kube-scheduler-multinode-415589" [6f43b8d1-3b77-4df6-8b66-7d08cf7c0682] Running
	I0919 16:54:39.487933   85253 system_pods.go:61] "storage-provisioner" [61db80e1-b248-49b3-aab0-4b70b4b47c51] Running
	I0919 16:54:39.487941   85253 system_pods.go:74] duration metric: took 183.022751ms to wait for pod list to return data ...
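
The system_pods sweep lists everything in kube-system and requires each pod to be Running, which is what produces the eight "Running" lines above. A compact sketch of that check with client-go (not minikube's system_pods.go):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// kubeSystemRunning reports whether every kube-system pod is Running.
func kubeSystemRunning(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			return false, nil
		}
	}
	return len(pods.Items) > 0, nil
}
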
	I0919 16:54:39.487949   85253 default_sa.go:34] waiting for default service account to be created ...
	I0919 16:54:39.679920   85253 request.go:629] Waited for 191.878504ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.11:8443/api/v1/namespaces/default/serviceaccounts
	I0919 16:54:39.679987   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/default/serviceaccounts
	I0919 16:54:39.679993   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:39.680000   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:39.680006   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:39.683028   85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 16:54:39.683047   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:39.683054   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:39 GMT
	I0919 16:54:39.683059   85253 round_trippers.go:580]     Audit-Id: afea5b70-8aeb-46e3-aca9-6b193e268e6a
	I0919 16:54:39.683065   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:39.683070   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:39.683075   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:39.683080   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:39.683089   85253 round_trippers.go:580]     Content-Length: 261
	I0919 16:54:39.683110   85253 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"f02b0790-b184-491e-88ab-fc300c097bfd","resourceVersion":"303","creationTimestamp":"2023-09-19T16:54:22Z"}}]}
	I0919 16:54:39.683392   85253 default_sa.go:45] found service account: "default"
	I0919 16:54:39.683417   85253 default_sa.go:55] duration metric: took 195.462471ms for default service account to be created ...
	I0919 16:54:39.683426   85253 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 16:54:39.879812   85253 request.go:629] Waited for 196.308979ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods
	I0919 16:54:39.879893   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods
	I0919 16:54:39.879898   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:39.879910   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:39.879917   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:39.884125   85253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 16:54:39.884151   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:39.884162   85253 round_trippers.go:580]     Audit-Id: 7033d54c-182b-4127-b32e-4b2b37e1441c
	I0919 16:54:39.884171   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:39.884179   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:39.884188   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:39.884196   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:39.884204   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:39 GMT
	I0919 16:54:39.885416   85253 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ctsv5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d4fcd880-e2ad-4d44-a070-e2af114e5e38","resourceVersion":"413","creationTimestamp":"2023-09-19T16:54:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6b79652-1294-4fc4-9085-191712db297a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6b79652-1294-4fc4-9085-191712db297a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I0919 16:54:39.887153   85253 system_pods.go:86] 8 kube-system pods found
	I0919 16:54:39.887174   85253 system_pods.go:89] "coredns-5dd5756b68-ctsv5" [d4fcd880-e2ad-4d44-a070-e2af114e5e38] Running
	I0919 16:54:39.887179   85253 system_pods.go:89] "etcd-multinode-415589" [1dbf3be3-1373-453b-a745-575b7f604586] Running
	I0919 16:54:39.887183   85253 system_pods.go:89] "kindnet-w9q5z" [39f88f25-8a6e-475c-8ef1-77c9d289fd48] Running
	I0919 16:54:39.887187   85253 system_pods.go:89] "kube-apiserver-multinode-415589" [4ecf615e-9f92-46f8-8b34-9de418bca0ac] Running
	I0919 16:54:39.887195   85253 system_pods.go:89] "kube-controller-manager-multinode-415589" [3b76511f-a4ea-484d-a0f7-6968c3abf350] Running
	I0919 16:54:39.887199   85253 system_pods.go:89] "kube-proxy-r6jtp" [a1f6a8f6-f608-4f79-9fd4-1a570bde14a6] Running
	I0919 16:54:39.887207   85253 system_pods.go:89] "kube-scheduler-multinode-415589" [6f43b8d1-3b77-4df6-8b66-7d08cf7c0682] Running
	I0919 16:54:39.887211   85253 system_pods.go:89] "storage-provisioner" [61db80e1-b248-49b3-aab0-4b70b4b47c51] Running
	I0919 16:54:39.887221   85253 system_pods.go:126] duration metric: took 203.789788ms to wait for k8s-apps to be running ...
	I0919 16:54:39.887230   85253 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 16:54:39.887282   85253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 16:54:39.903243   85253 system_svc.go:56] duration metric: took 16.000379ms WaitForService to wait for kubelet.
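
system_svc.go decides kubelet health purely from the exit status of the `systemctl is-active --quiet` command run over SSH above; `--quiet` suppresses all output, so the exit code carries the answer. A local stand-in for that check:

package main

import "os/exec"

// kubeletActive mirrors the ssh_runner command above, run locally:
// `is-active --quiet` prints nothing and reports state via exit code.
func kubeletActive() bool {
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	return err == nil // exit status 0 means the unit is active
}
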
	I0919 16:54:39.903270   85253 kubeadm.go:581] duration metric: took 16.818444013s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0919 16:54:39.903291   85253 node_conditions.go:102] verifying NodePressure condition ...
	I0919 16:54:40.080803   85253 request.go:629] Waited for 177.36292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.11:8443/api/v1/nodes
	I0919 16:54:40.080866   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes
	I0919 16:54:40.080871   85253 round_trippers.go:469] Request Headers:
	I0919 16:54:40.080879   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:54:40.080885   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:54:40.083787   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:54:40.083808   85253 round_trippers.go:577] Response Headers:
	I0919 16:54:40.083816   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:54:40 GMT
	I0919 16:54:40.083821   85253 round_trippers.go:580]     Audit-Id: 8632c30a-9fcb-4389-8d48-3a66b388a4d3
	I0919 16:54:40.083826   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:54:40.083831   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:54:40.083836   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:54:40.083841   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:54:40.084014   85253 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"403","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4835 chars]
	I0919 16:54:40.084498   85253 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 16:54:40.084523   85253 node_conditions.go:123] node cpu capacity is 2
	I0919 16:54:40.084535   85253 node_conditions.go:105] duration metric: took 181.241026ms to run NodePressure ...
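
The NodePressure step reads each node's capacity (the 17784752Ki ephemeral-storage and 2-CPU figures above) and would fail on any true pressure condition. A sketch of that verification with client-go, not minikube's node_conditions.go:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// checkNodePressure prints each node's capacity and errors out if any
// memory, disk, or PID pressure condition is True.
func checkNodePressure(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					return fmt.Errorf("node %s reports %s", n.Name, c.Type)
				}
			}
		}
	}
	return nil
}
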
	I0919 16:54:40.084547   85253 start.go:228] waiting for startup goroutines ...
	I0919 16:54:40.084554   85253 start.go:233] waiting for cluster config update ...
	I0919 16:54:40.084566   85253 start.go:242] writing updated cluster config ...
	I0919 16:54:40.086745   85253 out.go:177] 
	I0919 16:54:40.088445   85253 config.go:182] Loaded profile config "multinode-415589": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 16:54:40.088521   85253 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/config.json ...
	I0919 16:54:40.090222   85253 out.go:177] * Starting worker node multinode-415589-m02 in cluster multinode-415589
	I0919 16:54:40.091409   85253 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 16:54:40.091436   85253 cache.go:57] Caching tarball of preloaded images
	I0919 16:54:40.091547   85253 preload.go:174] Found /home/jenkins/minikube-integration/17240-65689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 16:54:40.091559   85253 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 16:54:40.091626   85253 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/config.json ...
	I0919 16:54:40.091828   85253 start.go:365] acquiring machines lock for multinode-415589-m02: {Name:mk203c3120e1410acfaa868a5fe996910aac1894 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 16:54:40.091874   85253 start.go:369] acquired machines lock for "multinode-415589-m02" in 26.599µs
	I0919 16:54:40.091892   85253 start.go:93] Provisioning new machine with config: &{Name:multinode-415589 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-415589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.11 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0919 16:54:40.091957   85253 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0919 16:54:40.093606   85253 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0919 16:54:40.093699   85253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 16:54:40.093737   85253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:54:40.108106   85253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44393
	I0919 16:54:40.108537   85253 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:54:40.109026   85253 main.go:141] libmachine: Using API Version  1
	I0919 16:54:40.109054   85253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:54:40.109446   85253 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:54:40.109641   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetMachineName
	I0919 16:54:40.109802   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .DriverName
	I0919 16:54:40.109950   85253 start.go:159] libmachine.API.Create for "multinode-415589" (driver="kvm2")
	I0919 16:54:40.109983   85253 client.go:168] LocalClient.Create starting
	I0919 16:54:40.110028   85253 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem
	I0919 16:54:40.110060   85253 main.go:141] libmachine: Decoding PEM data...
	I0919 16:54:40.110080   85253 main.go:141] libmachine: Parsing certificate...
	I0919 16:54:40.110133   85253 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem
	I0919 16:54:40.110152   85253 main.go:141] libmachine: Decoding PEM data...
	I0919 16:54:40.110164   85253 main.go:141] libmachine: Parsing certificate...
	I0919 16:54:40.110181   85253 main.go:141] libmachine: Running pre-create checks...
	I0919 16:54:40.110190   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .PreCreateCheck
	I0919 16:54:40.110340   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetConfigRaw
	I0919 16:54:40.110707   85253 main.go:141] libmachine: Creating machine...
	I0919 16:54:40.110721   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .Create
	I0919 16:54:40.110867   85253 main.go:141] libmachine: (multinode-415589-m02) Creating KVM machine...
	I0919 16:54:40.112165   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found existing default KVM network
	I0919 16:54:40.112351   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found existing private KVM network mk-multinode-415589
	I0919 16:54:40.112469   85253 main.go:141] libmachine: (multinode-415589-m02) Setting up store path in /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m02 ...
	I0919 16:54:40.112500   85253 main.go:141] libmachine: (multinode-415589-m02) Building disk image from file:///home/jenkins/minikube-integration/17240-65689/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I0919 16:54:40.112551   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:40.112446   85623 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17240-65689/.minikube
	I0919 16:54:40.112683   85253 main.go:141] libmachine: (multinode-415589-m02) Downloading /home/jenkins/minikube-integration/17240-65689/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17240-65689/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I0919 16:54:40.329687   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:40.329515   85623 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m02/id_rsa...
	I0919 16:54:40.643644   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:40.643501   85623 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m02/multinode-415589-m02.rawdisk...
	I0919 16:54:40.643674   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | Writing magic tar header
	I0919 16:54:40.643686   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | Writing SSH key tar header
	I0919 16:54:40.643695   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:40.643608   85623 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m02 ...
	I0919 16:54:40.643708   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m02
	I0919 16:54:40.643836   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-65689/.minikube/machines
	I0919 16:54:40.643871   85253 main.go:141] libmachine: (multinode-415589-m02) Setting executable bit set on /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m02 (perms=drwx------)
	I0919 16:54:40.643884   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-65689/.minikube
	I0919 16:54:40.643900   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-65689
	I0919 16:54:40.643913   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0919 16:54:40.643926   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | Checking permissions on dir: /home/jenkins
	I0919 16:54:40.643938   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | Checking permissions on dir: /home
	I0919 16:54:40.643953   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | Skipping /home - not owner
	I0919 16:54:40.643969   85253 main.go:141] libmachine: (multinode-415589-m02) Setting executable bit set on /home/jenkins/minikube-integration/17240-65689/.minikube/machines (perms=drwxr-xr-x)
	I0919 16:54:40.643985   85253 main.go:141] libmachine: (multinode-415589-m02) Setting executable bit set on /home/jenkins/minikube-integration/17240-65689/.minikube (perms=drwxr-xr-x)
	I0919 16:54:40.643995   85253 main.go:141] libmachine: (multinode-415589-m02) Setting executable bit set on /home/jenkins/minikube-integration/17240-65689 (perms=drwxrwxr-x)
	I0919 16:54:40.644008   85253 main.go:141] libmachine: (multinode-415589-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0919 16:54:40.644021   85253 main.go:141] libmachine: (multinode-415589-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0919 16:54:40.644038   85253 main.go:141] libmachine: (multinode-415589-m02) Creating domain...
	I0919 16:54:40.644874   85253 main.go:141] libmachine: (multinode-415589-m02) define libvirt domain using xml: 
	I0919 16:54:40.644898   85253 main.go:141] libmachine: (multinode-415589-m02) <domain type='kvm'>
	I0919 16:54:40.644912   85253 main.go:141] libmachine: (multinode-415589-m02)   <name>multinode-415589-m02</name>
	I0919 16:54:40.644926   85253 main.go:141] libmachine: (multinode-415589-m02)   <memory unit='MiB'>2200</memory>
	I0919 16:54:40.644937   85253 main.go:141] libmachine: (multinode-415589-m02)   <vcpu>2</vcpu>
	I0919 16:54:40.644949   85253 main.go:141] libmachine: (multinode-415589-m02)   <features>
	I0919 16:54:40.644968   85253 main.go:141] libmachine: (multinode-415589-m02)     <acpi/>
	I0919 16:54:40.644985   85253 main.go:141] libmachine: (multinode-415589-m02)     <apic/>
	I0919 16:54:40.644999   85253 main.go:141] libmachine: (multinode-415589-m02)     <pae/>
	I0919 16:54:40.645011   85253 main.go:141] libmachine: (multinode-415589-m02)     
	I0919 16:54:40.645025   85253 main.go:141] libmachine: (multinode-415589-m02)   </features>
	I0919 16:54:40.645035   85253 main.go:141] libmachine: (multinode-415589-m02)   <cpu mode='host-passthrough'>
	I0919 16:54:40.645048   85253 main.go:141] libmachine: (multinode-415589-m02)   
	I0919 16:54:40.645063   85253 main.go:141] libmachine: (multinode-415589-m02)   </cpu>
	I0919 16:54:40.645077   85253 main.go:141] libmachine: (multinode-415589-m02)   <os>
	I0919 16:54:40.645090   85253 main.go:141] libmachine: (multinode-415589-m02)     <type>hvm</type>
	I0919 16:54:40.645103   85253 main.go:141] libmachine: (multinode-415589-m02)     <boot dev='cdrom'/>
	I0919 16:54:40.645116   85253 main.go:141] libmachine: (multinode-415589-m02)     <boot dev='hd'/>
	I0919 16:54:40.645130   85253 main.go:141] libmachine: (multinode-415589-m02)     <bootmenu enable='no'/>
	I0919 16:54:40.645145   85253 main.go:141] libmachine: (multinode-415589-m02)   </os>
	I0919 16:54:40.645159   85253 main.go:141] libmachine: (multinode-415589-m02)   <devices>
	I0919 16:54:40.645172   85253 main.go:141] libmachine: (multinode-415589-m02)     <disk type='file' device='cdrom'>
	I0919 16:54:40.645193   85253 main.go:141] libmachine: (multinode-415589-m02)       <source file='/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m02/boot2docker.iso'/>
	I0919 16:54:40.645206   85253 main.go:141] libmachine: (multinode-415589-m02)       <target dev='hdc' bus='scsi'/>
	I0919 16:54:40.645220   85253 main.go:141] libmachine: (multinode-415589-m02)       <readonly/>
	I0919 16:54:40.645230   85253 main.go:141] libmachine: (multinode-415589-m02)     </disk>
	I0919 16:54:40.645240   85253 main.go:141] libmachine: (multinode-415589-m02)     <disk type='file' device='disk'>
	I0919 16:54:40.645258   85253 main.go:141] libmachine: (multinode-415589-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0919 16:54:40.645275   85253 main.go:141] libmachine: (multinode-415589-m02)       <source file='/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m02/multinode-415589-m02.rawdisk'/>
	I0919 16:54:40.645284   85253 main.go:141] libmachine: (multinode-415589-m02)       <target dev='hda' bus='virtio'/>
	I0919 16:54:40.645292   85253 main.go:141] libmachine: (multinode-415589-m02)     </disk>
	I0919 16:54:40.645298   85253 main.go:141] libmachine: (multinode-415589-m02)     <interface type='network'>
	I0919 16:54:40.645303   85253 main.go:141] libmachine: (multinode-415589-m02)       <source network='mk-multinode-415589'/>
	I0919 16:54:40.645309   85253 main.go:141] libmachine: (multinode-415589-m02)       <model type='virtio'/>
	I0919 16:54:40.645314   85253 main.go:141] libmachine: (multinode-415589-m02)     </interface>
	I0919 16:54:40.645321   85253 main.go:141] libmachine: (multinode-415589-m02)     <interface type='network'>
	I0919 16:54:40.645326   85253 main.go:141] libmachine: (multinode-415589-m02)       <source network='default'/>
	I0919 16:54:40.645332   85253 main.go:141] libmachine: (multinode-415589-m02)       <model type='virtio'/>
	I0919 16:54:40.645339   85253 main.go:141] libmachine: (multinode-415589-m02)     </interface>
	I0919 16:54:40.645349   85253 main.go:141] libmachine: (multinode-415589-m02)     <serial type='pty'>
	I0919 16:54:40.645358   85253 main.go:141] libmachine: (multinode-415589-m02)       <target port='0'/>
	I0919 16:54:40.645372   85253 main.go:141] libmachine: (multinode-415589-m02)     </serial>
	I0919 16:54:40.645384   85253 main.go:141] libmachine: (multinode-415589-m02)     <console type='pty'>
	I0919 16:54:40.645398   85253 main.go:141] libmachine: (multinode-415589-m02)       <target type='serial' port='0'/>
	I0919 16:54:40.645414   85253 main.go:141] libmachine: (multinode-415589-m02)     </console>
	I0919 16:54:40.645431   85253 main.go:141] libmachine: (multinode-415589-m02)     <rng model='virtio'>
	I0919 16:54:40.645444   85253 main.go:141] libmachine: (multinode-415589-m02)       <backend model='random'>/dev/random</backend>
	I0919 16:54:40.645458   85253 main.go:141] libmachine: (multinode-415589-m02)     </rng>
	I0919 16:54:40.645470   85253 main.go:141] libmachine: (multinode-415589-m02)     
	I0919 16:54:40.645536   85253 main.go:141] libmachine: (multinode-415589-m02)     
	I0919 16:54:40.645562   85253 main.go:141] libmachine: (multinode-415589-m02)   </devices>
	I0919 16:54:40.645574   85253 main.go:141] libmachine: (multinode-415589-m02) </domain>
	I0919 16:54:40.645583   85253 main.go:141] libmachine: (multinode-415589-m02) 
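
The XML logged line by line above is handed to libvirt to persist and boot the domain. A sketch using the libvirt-go bindings; minikube's kvm2 driver does the equivalent inside docker-machine-driver-kvm2, so treat this as an illustration rather than the driver's code:

package main

import (
	libvirt "github.com/libvirt/libvirt-go"
)

// defineAndStart persists a domain definition and boots it, the two
// steps behind "define libvirt domain using xml" and "Creating domain...".
func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()

	return dom.Create() // starts the defined domain
}
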
	I0919 16:54:40.652507   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:03:87:04 in network default
	I0919 16:54:40.652977   85253 main.go:141] libmachine: (multinode-415589-m02) Ensuring networks are active...
	I0919 16:54:40.652999   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:54:40.653763   85253 main.go:141] libmachine: (multinode-415589-m02) Ensuring network default is active
	I0919 16:54:40.654148   85253 main.go:141] libmachine: (multinode-415589-m02) Ensuring network mk-multinode-415589 is active
	I0919 16:54:40.654518   85253 main.go:141] libmachine: (multinode-415589-m02) Getting domain xml...
	I0919 16:54:40.655370   85253 main.go:141] libmachine: (multinode-415589-m02) Creating domain...
	I0919 16:54:41.874765   85253 main.go:141] libmachine: (multinode-415589-m02) Waiting to get IP...
	I0919 16:54:41.875680   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:54:41.876077   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find current IP address of domain multinode-415589-m02 in network mk-multinode-415589
	I0919 16:54:41.876100   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:41.876051   85623 retry.go:31] will retry after 197.512955ms: waiting for machine to come up
	I0919 16:54:42.075574   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:54:42.075998   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find current IP address of domain multinode-415589-m02 in network mk-multinode-415589
	I0919 16:54:42.076029   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:42.075937   85623 retry.go:31] will retry after 386.1773ms: waiting for machine to come up
	I0919 16:54:42.463825   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:54:42.464318   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find current IP address of domain multinode-415589-m02 in network mk-multinode-415589
	I0919 16:54:42.464354   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:42.464267   85623 retry.go:31] will retry after 394.663206ms: waiting for machine to come up
	I0919 16:54:42.860862   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:54:42.861239   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find current IP address of domain multinode-415589-m02 in network mk-multinode-415589
	I0919 16:54:42.861275   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:42.861190   85623 retry.go:31] will retry after 474.519775ms: waiting for machine to come up
	I0919 16:54:43.337444   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:54:43.337896   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find current IP address of domain multinode-415589-m02 in network mk-multinode-415589
	I0919 16:54:43.337930   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:43.337846   85623 retry.go:31] will retry after 572.54958ms: waiting for machine to come up
	I0919 16:54:43.911505   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:54:43.911975   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find current IP address of domain multinode-415589-m02 in network mk-multinode-415589
	I0919 16:54:43.912001   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:43.911910   85623 retry.go:31] will retry after 839.255424ms: waiting for machine to come up
	I0919 16:54:44.753032   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:54:44.753477   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find current IP address of domain multinode-415589-m02 in network mk-multinode-415589
	I0919 16:54:44.753506   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:44.753376   85623 retry.go:31] will retry after 1.021339087s: waiting for machine to come up
	I0919 16:54:45.776541   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:54:45.776938   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find current IP address of domain multinode-415589-m02 in network mk-multinode-415589
	I0919 16:54:45.776973   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:45.776877   85623 retry.go:31] will retry after 1.408623312s: waiting for machine to come up
	I0919 16:54:47.186977   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:54:47.187413   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find current IP address of domain multinode-415589-m02 in network mk-multinode-415589
	I0919 16:54:47.187447   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:47.187356   85623 retry.go:31] will retry after 1.375668679s: waiting for machine to come up
	I0919 16:54:48.564941   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:54:48.565355   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find current IP address of domain multinode-415589-m02 in network mk-multinode-415589
	I0919 16:54:48.565387   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:48.565295   85623 retry.go:31] will retry after 2.222435737s: waiting for machine to come up
	I0919 16:54:50.789090   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:54:50.789653   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find current IP address of domain multinode-415589-m02 in network mk-multinode-415589
	I0919 16:54:50.789692   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:50.789578   85623 retry.go:31] will retry after 2.067069722s: waiting for machine to come up
	I0919 16:54:52.859900   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:54:52.860393   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find current IP address of domain multinode-415589-m02 in network mk-multinode-415589
	I0919 16:54:52.860424   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:52.860343   85623 retry.go:31] will retry after 3.562421103s: waiting for machine to come up
	I0919 16:54:56.424446   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:54:56.424822   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find current IP address of domain multinode-415589-m02 in network mk-multinode-415589
	I0919 16:54:56.424854   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:56.424772   85623 retry.go:31] will retry after 3.449099167s: waiting for machine to come up
	I0919 16:54:59.874985   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:54:59.875322   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find current IP address of domain multinode-415589-m02 in network mk-multinode-415589
	I0919 16:54:59.875354   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | I0919 16:54:59.875267   85623 retry.go:31] will retry after 5.18201167s: waiting for machine to come up
	I0919 16:55:05.058472   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:05.058890   85253 main.go:141] libmachine: (multinode-415589-m02) Found IP for machine: 192.168.50.170
	I0919 16:55:05.058918   85253 main.go:141] libmachine: (multinode-415589-m02) Reserving static IP address...
	I0919 16:55:05.058937   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has current primary IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:05.059340   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | unable to find host DHCP lease matching {name: "multinode-415589-m02", mac: "52:54:00:33:e7:29", ip: "192.168.50.170"} in network mk-multinode-415589
	I0919 16:55:05.132559   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | Getting to WaitForSSH function...
	I0919 16:55:05.132596   85253 main.go:141] libmachine: (multinode-415589-m02) Reserved static IP address: 192.168.50.170
	I0919 16:55:05.132613   85253 main.go:141] libmachine: (multinode-415589-m02) Waiting for SSH to be available...
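The "will retry after …" lines above come from a polling helper that waits for libvirt to hand the VM a DHCP lease, sleeping a growing, jittered interval between attempts. As a rough, self-contained sketch of that pattern (illustrative only; poll and its 500ms base delay are hypothetical names, not minikube's actual retry.go):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // poll retries fn with a randomized, growing delay until it succeeds
    // or the overall deadline expires -- the same shape as the
    // "will retry after 839.255424ms" lines in the log above.
    func poll(maxWait time.Duration, fn func() error) error {
    	deadline := time.Now().Add(maxWait)
    	base := 500 * time.Millisecond
    	for attempt := 0; ; attempt++ {
    		err := fn()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out after %d attempts: %w", attempt+1, err)
    		}
    		// Grow the delay and add jitter so concurrent waiters spread out.
    		delay := base + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
    		time.Sleep(delay)
    		base = base * 3 / 2
    	}
    }

    func main() {
    	tries := 0
    	err := poll(10*time.Second, func() error {
    		tries++
    		if tries < 4 {
    			return errors.New("unable to find current IP address")
    		}
    		return nil
    	})
    	fmt.Println("result:", err)
    }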
	I0919 16:55:05.135279   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:05.135819   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:minikube Clientid:01:52:54:00:33:e7:29}
	I0919 16:55:05.135846   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:05.136243   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | Using SSH client type: external
	I0919 16:55:05.136281   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m02/id_rsa (-rw-------)
	I0919 16:55:05.136333   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.170 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 16:55:05.136397   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | About to run SSH command:
	I0919 16:55:05.136421   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | exit 0
	I0919 16:55:05.233464   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | SSH cmd err, output: <nil>: 
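The probe above shells out to the system ssh binary with host-key checking disabled and key-only auth, running `exit 0` purely to confirm the daemon is reachable. A minimal sketch of assembling such an invocation from the flags in the log (buildSSHCmd, addr, and keyPath are illustrative names):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // buildSSHCmd assembles an external ssh invocation like the one logged
    // above: no config file, short timeouts, key-only auth, no known_hosts.
    func buildSSHCmd(addr, keyPath, remote string) *exec.Cmd {
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"-p", "22",
    		"docker@" + addr,
    		remote, // e.g. "exit 0" just to prove SSH is up
    	}
    	return exec.Command("/usr/bin/ssh", args...)
    }

    func main() {
    	cmd := buildSSHCmd("192.168.50.170", "/path/to/id_rsa", "exit 0")
    	fmt.Println(cmd.String())
    }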
	I0919 16:55:05.233738   85253 main.go:141] libmachine: (multinode-415589-m02) KVM machine creation complete!
	I0919 16:55:05.234078   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetConfigRaw
	I0919 16:55:05.234608   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .DriverName
	I0919 16:55:05.234845   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .DriverName
	I0919 16:55:05.235038   85253 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0919 16:55:05.235058   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetState
	I0919 16:55:05.236255   85253 main.go:141] libmachine: Detecting operating system of created instance...
	I0919 16:55:05.236273   85253 main.go:141] libmachine: Waiting for SSH to be available...
	I0919 16:55:05.236283   85253 main.go:141] libmachine: Getting to WaitForSSH function...
	I0919 16:55:05.236293   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHHostname
	I0919 16:55:05.238813   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:05.239103   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
	I0919 16:55:05.239144   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:05.239370   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHPort
	I0919 16:55:05.239547   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
	I0919 16:55:05.239714   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
	I0919 16:55:05.239879   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHUsername
	I0919 16:55:05.240031   85253 main.go:141] libmachine: Using SSH client type: native
	I0919 16:55:05.240419   85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.170 22 <nil> <nil>}
	I0919 16:55:05.240431   85253 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0919 16:55:05.368698   85253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 16:55:05.368732   85253 main.go:141] libmachine: Detecting the provisioner...
	I0919 16:55:05.368745   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHHostname
	I0919 16:55:05.371455   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:05.371842   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
	I0919 16:55:05.371866   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:05.372002   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHPort
	I0919 16:55:05.372203   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
	I0919 16:55:05.372347   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
	I0919 16:55:05.372512   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHUsername
	I0919 16:55:05.372681   85253 main.go:141] libmachine: Using SSH client type: native
	I0919 16:55:05.373151   85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.170 22 <nil> <nil>}
	I0919 16:55:05.373171   85253 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0919 16:55:05.506724   85253 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0919 16:55:05.506783   85253 main.go:141] libmachine: found compatible host: buildroot
	I0919 16:55:05.506791   85253 main.go:141] libmachine: Provisioning with buildroot...
	I0919 16:55:05.506801   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetMachineName
	I0919 16:55:05.507116   85253 buildroot.go:166] provisioning hostname "multinode-415589-m02"
	I0919 16:55:05.507141   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetMachineName
	I0919 16:55:05.507400   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHHostname
	I0919 16:55:05.510018   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:05.510363   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
	I0919 16:55:05.510397   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:05.510517   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHPort
	I0919 16:55:05.510735   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
	I0919 16:55:05.510941   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
	I0919 16:55:05.511107   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHUsername
	I0919 16:55:05.511316   85253 main.go:141] libmachine: Using SSH client type: native
	I0919 16:55:05.511620   85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.170 22 <nil> <nil>}
	I0919 16:55:05.511633   85253 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-415589-m02 && echo "multinode-415589-m02" | sudo tee /etc/hostname
	I0919 16:55:05.659902   85253 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-415589-m02
	
	I0919 16:55:05.659931   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHHostname
	I0919 16:55:05.663115   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:05.663485   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
	I0919 16:55:05.663550   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:05.663700   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHPort
	I0919 16:55:05.663910   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
	I0919 16:55:05.664061   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
	I0919 16:55:05.664155   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHUsername
	I0919 16:55:05.664348   85253 main.go:141] libmachine: Using SSH client type: native
	I0919 16:55:05.664955   85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.170 22 <nil> <nil>}
	I0919 16:55:05.664993   85253 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-415589-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-415589-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-415589-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 16:55:05.806167   85253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 16:55:05.806204   85253 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17240-65689/.minikube CaCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17240-65689/.minikube}
	I0919 16:55:05.806225   85253 buildroot.go:174] setting up certificates
	I0919 16:55:05.806233   85253 provision.go:83] configureAuth start
	I0919 16:55:05.806242   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetMachineName
	I0919 16:55:05.806556   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetIP
	I0919 16:55:05.808915   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:05.809245   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
	I0919 16:55:05.809272   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:05.809424   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHHostname
	I0919 16:55:05.811418   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:05.811864   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
	I0919 16:55:05.811896   85253 provision.go:138] copyHostCerts
	I0919 16:55:05.811905   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:05.811927   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem
	I0919 16:55:05.811968   85253 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem, removing ...
	I0919 16:55:05.811982   85253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem
	I0919 16:55:05.812052   85253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem (1078 bytes)
	I0919 16:55:05.812145   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem
	I0919 16:55:05.812162   85253 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem, removing ...
	I0919 16:55:05.812169   85253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem
	I0919 16:55:05.812194   85253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem (1123 bytes)
	I0919 16:55:05.812238   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem
	I0919 16:55:05.812260   85253 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem, removing ...
	I0919 16:55:05.812267   85253 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem
	I0919 16:55:05.812289   85253 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem (1675 bytes)
	I0919 16:55:05.812333   85253 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem org=jenkins.multinode-415589-m02 san=[192.168.50.170 192.168.50.170 localhost 127.0.0.1 minikube multinode-415589-m02]
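The server certificate generated above carries both IP and DNS SANs so it stays valid for every address and name the node answers on. A self-contained sketch with Go's crypto/x509 (self-signed here for brevity; the real flow signs with the ca.pem/ca-key.pem pair listed in the auth options):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-415589-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The SAN list from the log: node IP, loopback, and hostnames.
    		IPAddresses: []net.IP{net.ParseIP("192.168.50.170"), net.ParseIP("127.0.0.1")},
    		DNSNames:    []string{"localhost", "minikube", "multinode-415589-m02"},
    	}
    	// Self-signed for brevity; minikube signs with its CA key instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }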
	I0919 16:55:05.959052   85253 provision.go:172] copyRemoteCerts
	I0919 16:55:05.959128   85253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 16:55:05.959161   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHHostname
	I0919 16:55:05.961903   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:05.962259   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
	I0919 16:55:05.962297   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:05.962477   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHPort
	I0919 16:55:05.962680   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
	I0919 16:55:05.962883   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHUsername
	I0919 16:55:05.963072   85253 sshutil.go:53] new ssh client: &{IP:192.168.50.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m02/id_rsa Username:docker}
	I0919 16:55:06.058846   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 16:55:06.058913   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 16:55:06.082850   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 16:55:06.082914   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 16:55:06.106828   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 16:55:06.106896   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0919 16:55:06.131071   85253 provision.go:86] duration metric: configureAuth took 324.825149ms
	I0919 16:55:06.131098   85253 buildroot.go:189] setting minikube options for container-runtime
	I0919 16:55:06.131282   85253 config.go:182] Loaded profile config "multinode-415589": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 16:55:06.131308   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .DriverName
	I0919 16:55:06.131618   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHHostname
	I0919 16:55:06.133954   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:06.134405   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
	I0919 16:55:06.134439   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:06.134616   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHPort
	I0919 16:55:06.134820   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
	I0919 16:55:06.134976   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
	I0919 16:55:06.135126   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHUsername
	I0919 16:55:06.135352   85253 main.go:141] libmachine: Using SSH client type: native
	I0919 16:55:06.135889   85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.170 22 <nil> <nil>}
	I0919 16:55:06.135912   85253 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 16:55:06.267202   85253 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0919 16:55:06.267230   85253 buildroot.go:70] root file system type: tmpfs
	I0919 16:55:06.267347   85253 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 16:55:06.267364   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHHostname
	I0919 16:55:06.270085   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:06.270516   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
	I0919 16:55:06.270549   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:06.270700   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHPort
	I0919 16:55:06.270896   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
	I0919 16:55:06.271062   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
	I0919 16:55:06.271216   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHUsername
	I0919 16:55:06.271392   85253 main.go:141] libmachine: Using SSH client type: native
	I0919 16:55:06.271698   85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.170 22 <nil> <nil>}
	I0919 16:55:06.271758   85253 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.50.11"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 16:55:06.414557   85253 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.50.11
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
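The unit echoed back on stdout confirms the write; note the empty ExecStart= line, which clears the command inherited from the base configuration exactly as the comments describe, and the per-node NO_PROXY environment. A sketch of rendering such a unit from a template (illustrative only; unitTmpl and unitParams are hypothetical names, not minikube's actual template):

    package main

    import (
    	"os"
    	"text/template"
    )

    // unitTmpl is a trimmed-down docker.service drop-in: the empty
    // ExecStart= clears any inherited command before setting a new one,
    // and NO_PROXY is injected per-node.
    const unitTmpl = `[Service]
    Type=notify
    {{if .NoProxy}}Environment="NO_PROXY={{.NoProxy}}"
    {{end}}ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock
    `

    type unitParams struct {
    	NoProxy string // hypothetical field: the control-plane IP to bypass
    }

    func main() {
    	t := template.Must(template.New("docker.service").Parse(unitTmpl))
    	if err := t.Execute(os.Stdout, unitParams{NoProxy: "192.168.50.11"}); err != nil {
    		panic(err)
    	}
    }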
	
	I0919 16:55:06.414600   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHHostname
	I0919 16:55:06.417331   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:06.417735   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
	I0919 16:55:06.417771   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:06.417971   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHPort
	I0919 16:55:06.418169   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
	I0919 16:55:06.418364   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
	I0919 16:55:06.418548   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHUsername
	I0919 16:55:06.418708   85253 main.go:141] libmachine: Using SSH client type: native
	I0919 16:55:06.419030   85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.170 22 <nil> <nil>}
	I0919 16:55:06.419058   85253 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 16:55:07.260954   85253 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0919 16:55:07.260993   85253 main.go:141] libmachine: Checking connection to Docker...
	I0919 16:55:07.261007   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetURL
	I0919 16:55:07.262442   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | Using libvirt version 6000000
	I0919 16:55:07.264964   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:07.265364   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
	I0919 16:55:07.265398   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:07.265602   85253 main.go:141] libmachine: Docker is up and running!
	I0919 16:55:07.265631   85253 main.go:141] libmachine: Reticulating splines...
	I0919 16:55:07.265640   85253 client.go:171] LocalClient.Create took 27.15564589s
	I0919 16:55:07.265670   85253 start.go:167] duration metric: libmachine.API.Create for "multinode-415589" took 27.155721608s
	I0919 16:55:07.265682   85253 start.go:300] post-start starting for "multinode-415589-m02" (driver="kvm2")
	I0919 16:55:07.265698   85253 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 16:55:07.265718   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .DriverName
	I0919 16:55:07.265980   85253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 16:55:07.266012   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHHostname
	I0919 16:55:07.268539   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:07.268971   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
	I0919 16:55:07.269003   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:07.269164   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHPort
	I0919 16:55:07.269338   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
	I0919 16:55:07.269516   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHUsername
	I0919 16:55:07.269679   85253 sshutil.go:53] new ssh client: &{IP:192.168.50.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m02/id_rsa Username:docker}
	I0919 16:55:07.363688   85253 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 16:55:07.367667   85253 command_runner.go:130] > NAME=Buildroot
	I0919 16:55:07.367687   85253 command_runner.go:130] > VERSION=2021.02.12-1-gb090841-dirty
	I0919 16:55:07.367693   85253 command_runner.go:130] > ID=buildroot
	I0919 16:55:07.367702   85253 command_runner.go:130] > VERSION_ID=2021.02.12
	I0919 16:55:07.367708   85253 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0919 16:55:07.367741   85253 info.go:137] Remote host: Buildroot 2021.02.12
	I0919 16:55:07.367761   85253 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-65689/.minikube/addons for local assets ...
	I0919 16:55:07.367825   85253 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-65689/.minikube/files for local assets ...
	I0919 16:55:07.367914   85253 filesync.go:149] local asset: /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem -> 733972.pem in /etc/ssl/certs
	I0919 16:55:07.367925   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem -> /etc/ssl/certs/733972.pem
	I0919 16:55:07.368022   85253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 16:55:07.376930   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem --> /etc/ssl/certs/733972.pem (1708 bytes)
	I0919 16:55:07.397976   85253 start.go:303] post-start completed in 132.278721ms
	I0919 16:55:07.398033   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetConfigRaw
	I0919 16:55:07.398721   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetIP
	I0919 16:55:07.401557   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:07.401919   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
	I0919 16:55:07.401957   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:07.402230   85253 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/config.json ...
	I0919 16:55:07.402471   85253 start.go:128] duration metric: createHost completed in 27.310501904s
	I0919 16:55:07.402501   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHHostname
	I0919 16:55:07.404785   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:07.405072   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
	I0919 16:55:07.405104   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:07.405260   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHPort
	I0919 16:55:07.405468   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
	I0919 16:55:07.405653   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
	I0919 16:55:07.405820   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHUsername
	I0919 16:55:07.405986   85253 main.go:141] libmachine: Using SSH client type: native
	I0919 16:55:07.406434   85253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.170 22 <nil> <nil>}
	I0919 16:55:07.406450   85253 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 16:55:07.538368   85253 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695142507.524227457
	
	I0919 16:55:07.538410   85253 fix.go:206] guest clock: 1695142507.524227457
	I0919 16:55:07.538421   85253 fix.go:219] Guest: 2023-09-19 16:55:07.524227457 +0000 UTC Remote: 2023-09-19 16:55:07.402485729 +0000 UTC m=+103.718930288 (delta=121.741728ms)
	I0919 16:55:07.538443   85253 fix.go:190] guest clock delta is within tolerance: 121.741728ms
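The clock check above parses the guest's seconds.nanoseconds timestamp and compares it against the host-side wall clock, proceeding only when the delta is within tolerance. A minimal sketch of that comparison, which reproduces the 121.741728ms delta from the log (parseGuestClock and the 2s threshold are illustrative):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns "1695142507.524227457" (date +%s.%N output)
    // into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1695142507.524227457")
    	if err != nil {
    		panic(err)
    	}
    	remote := time.Date(2023, 9, 19, 16, 55, 7, 402485729, time.UTC)
    	delta := guest.Sub(remote)
    	const tolerance = 2 * time.Second // illustrative threshold
    	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < tolerance && delta > -tolerance)
    }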
	I0919 16:55:07.538451   85253 start.go:83] releasing machines lock for "multinode-415589-m02", held for 27.446568134s
	I0919 16:55:07.538484   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .DriverName
	I0919 16:55:07.538804   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetIP
	I0919 16:55:07.541365   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:07.541741   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
	I0919 16:55:07.541779   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:07.544321   85253 out.go:177] * Found network options:
	I0919 16:55:07.545944   85253 out.go:177]   - NO_PROXY=192.168.50.11
	W0919 16:55:07.547076   85253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0919 16:55:07.547118   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .DriverName
	I0919 16:55:07.547619   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .DriverName
	I0919 16:55:07.547803   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .DriverName
	I0919 16:55:07.547934   85253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 16:55:07.547979   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHHostname
	W0919 16:55:07.547996   85253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0919 16:55:07.548089   85253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 16:55:07.548113   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHHostname
	I0919 16:55:07.550541   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:07.550853   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:07.550915   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
	I0919 16:55:07.550954   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:07.551006   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHPort
	I0919 16:55:07.551207   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
	I0919 16:55:07.551248   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
	I0919 16:55:07.551307   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:07.551398   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHUsername
	I0919 16:55:07.551420   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHPort
	I0919 16:55:07.551603   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
	I0919 16:55:07.551621   85253 sshutil.go:53] new ssh client: &{IP:192.168.50.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m02/id_rsa Username:docker}
	I0919 16:55:07.551759   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHUsername
	I0919 16:55:07.551907   85253 sshutil.go:53] new ssh client: &{IP:192.168.50.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m02/id_rsa Username:docker}
	I0919 16:55:07.644583   85253 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0919 16:55:07.644666   85253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 16:55:07.644741   85253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 16:55:07.672537   85253 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0919 16:55:07.673423   85253 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0919 16:55:07.673451   85253 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 16:55:07.673465   85253 start.go:469] detecting cgroup driver to use...
	I0919 16:55:07.673588   85253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 16:55:07.690687   85253 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0919 16:55:07.690788   85253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0919 16:55:07.699765   85253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 16:55:07.709776   85253 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0919 16:55:07.709833   85253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0919 16:55:07.719930   85253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 16:55:07.730515   85253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 16:55:07.741709   85253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 16:55:07.752639   85253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 16:55:07.762848   85253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 16:55:07.773028   85253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 16:55:07.782140   85253 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0919 16:55:07.782227   85253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 16:55:07.791096   85253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 16:55:07.909109   85253 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 16:55:07.927205   85253 start.go:469] detecting cgroup driver to use...
	I0919 16:55:07.927293   85253 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 16:55:07.944704   85253 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0919 16:55:07.944767   85253 command_runner.go:130] > [Unit]
	I0919 16:55:07.944777   85253 command_runner.go:130] > Description=Docker Application Container Engine
	I0919 16:55:07.944782   85253 command_runner.go:130] > Documentation=https://docs.docker.com
	I0919 16:55:07.944796   85253 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0919 16:55:07.944805   85253 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0919 16:55:07.944815   85253 command_runner.go:130] > StartLimitBurst=3
	I0919 16:55:07.944823   85253 command_runner.go:130] > StartLimitIntervalSec=60
	I0919 16:55:07.944830   85253 command_runner.go:130] > [Service]
	I0919 16:55:07.944835   85253 command_runner.go:130] > Type=notify
	I0919 16:55:07.944840   85253 command_runner.go:130] > Restart=on-failure
	I0919 16:55:07.944845   85253 command_runner.go:130] > Environment=NO_PROXY=192.168.50.11
	I0919 16:55:07.944852   85253 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0919 16:55:07.944863   85253 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0919 16:55:07.944870   85253 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0919 16:55:07.944877   85253 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0919 16:55:07.944887   85253 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0919 16:55:07.944897   85253 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0919 16:55:07.944904   85253 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0919 16:55:07.944915   85253 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0919 16:55:07.944922   85253 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0919 16:55:07.944926   85253 command_runner.go:130] > ExecStart=
	I0919 16:55:07.944941   85253 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0919 16:55:07.944952   85253 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0919 16:55:07.944959   85253 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0919 16:55:07.944965   85253 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0919 16:55:07.944971   85253 command_runner.go:130] > LimitNOFILE=infinity
	I0919 16:55:07.944975   85253 command_runner.go:130] > LimitNPROC=infinity
	I0919 16:55:07.944981   85253 command_runner.go:130] > LimitCORE=infinity
	I0919 16:55:07.944987   85253 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0919 16:55:07.944995   85253 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0919 16:55:07.944999   85253 command_runner.go:130] > TasksMax=infinity
	I0919 16:55:07.945005   85253 command_runner.go:130] > TimeoutStartSec=0
	I0919 16:55:07.945011   85253 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0919 16:55:07.945017   85253 command_runner.go:130] > Delegate=yes
	I0919 16:55:07.945024   85253 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0919 16:55:07.945034   85253 command_runner.go:130] > KillMode=process
	I0919 16:55:07.945038   85253 command_runner.go:130] > [Install]
	I0919 16:55:07.945042   85253 command_runner.go:130] > WantedBy=multi-user.target
	I0919 16:55:07.945402   85253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 16:55:07.961195   85253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 16:55:07.982944   85253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 16:55:07.995146   85253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 16:55:08.006161   85253 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 16:55:08.038776   85253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 16:55:08.051734   85253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 16:55:08.068960   85253 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0919 16:55:08.069046   85253 ssh_runner.go:195] Run: which cri-dockerd
	I0919 16:55:08.072702   85253 command_runner.go:130] > /usr/bin/cri-dockerd
	I0919 16:55:08.072990   85253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 16:55:08.081489   85253 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0919 16:55:08.099293   85253 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 16:55:08.212384   85253 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 16:55:08.322604   85253 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0919 16:55:08.322652   85253 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
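The 144-byte daemon.json pushed here is what pins Docker to the cgroupfs driver. A sketch of producing such a file (the exact keys minikube writes may differ; exec-opts with native.cgroupdriver is the standard Docker daemon option):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // daemonConfig mirrors the relevant part of /etc/docker/daemon.json.
    type daemonConfig struct {
    	ExecOpts  []string `json:"exec-opts"`
    	LogDriver string   `json:"log-driver,omitempty"`
    }

    func main() {
    	cfg := daemonConfig{
    		// Standard Docker option to select the cgroup driver.
    		ExecOpts: []string{"native.cgroupdriver=cgroupfs"},
    	}
    	out, err := json.MarshalIndent(cfg, "", "  ")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(out))
    }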
	I0919 16:55:08.341858   85253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 16:55:08.445976   85253 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 16:55:09.852095   85253 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.406075661s)
	I0919 16:55:09.852167   85253 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 16:55:09.953668   85253 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 16:55:10.053750   85253 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 16:55:10.170136   85253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 16:55:10.293259   85253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 16:55:10.309538   85253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 16:55:10.428884   85253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0919 16:55:10.512855   85253 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 16:55:10.512943   85253 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 16:55:10.518494   85253 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0919 16:55:10.518516   85253 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0919 16:55:10.518523   85253 command_runner.go:130] > Device: 16h/22d	Inode: 880         Links: 1
	I0919 16:55:10.518530   85253 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0919 16:55:10.518535   85253 command_runner.go:130] > Access: 2023-09-19 16:55:10.432062468 +0000
	I0919 16:55:10.518540   85253 command_runner.go:130] > Modify: 2023-09-19 16:55:10.432062468 +0000
	I0919 16:55:10.518544   85253 command_runner.go:130] > Change: 2023-09-19 16:55:10.435065926 +0000
	I0919 16:55:10.518548   85253 command_runner.go:130] >  Birth: -
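"Will wait 60s for socket path" is a poll-until-stat-succeeds loop on the CRI socket, which the stat output above satisfies. A self-contained sketch of that wait (waitForSocket and the 250ms interval are illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until path exists and is a socket, or the
    // timeout elapses -- the same shape as the cri-dockerd.sock wait above.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(250 * time.Millisecond)
    	}
    }

    func main() {
    	err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second)
    	fmt.Println("wait result:", err)
    }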
	I0919 16:55:10.518864   85253 start.go:537] Will wait 60s for crictl version
	I0919 16:55:10.518923   85253 ssh_runner.go:195] Run: which crictl
	I0919 16:55:10.523255   85253 command_runner.go:130] > /usr/bin/crictl
	I0919 16:55:10.523321   85253 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 16:55:10.580320   85253 command_runner.go:130] > Version:  0.1.0
	I0919 16:55:10.580350   85253 command_runner.go:130] > RuntimeName:  docker
	I0919 16:55:10.580459   85253 command_runner.go:130] > RuntimeVersion:  24.0.6
	I0919 16:55:10.580480   85253 command_runner.go:130] > RuntimeApiVersion:  v1
	I0919 16:55:10.582456   85253 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I0919 16:55:10.582536   85253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 16:55:10.608625   85253 command_runner.go:130] > 24.0.6
	I0919 16:55:10.608742   85253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 16:55:10.632834   85253 command_runner.go:130] > 24.0.6
	I0919 16:55:10.636360   85253 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I0919 16:55:10.637795   85253 out.go:177]   - env NO_PROXY=192.168.50.11
	I0919 16:55:10.639243   85253 main.go:141] libmachine: (multinode-415589-m02) Calling .GetIP
	I0919 16:55:10.642029   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:10.642431   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
	I0919 16:55:10.642462   85253 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:55:10.642670   85253 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0919 16:55:10.646718   85253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
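The one-liner above keeps /etc/hosts idempotent: strip any previous line for host.minikube.internal, then append the fresh mapping. A rough Go equivalent of the same filter-and-append technique (writing to a sidecar .new file instead of sudo-copying over /etc/hosts):

// hostsentry.go - sketch of the idempotent hosts-entry update shown above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Same filter as `grep -v $'\thost.minikube.internal$'`.
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath+".new", []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "192.168.50.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}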
	I0919 16:55:10.659147   85253 certs.go:56] Setting up /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589 for IP: 192.168.50.170
	I0919 16:55:10.659173   85253 certs.go:190] acquiring lock for shared ca certs: {Name:mkf975c4ed215d047afb89379d3c517cec3820b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:55:10.659326   85253 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.key
	I0919 16:55:10.659364   85253 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.key
	I0919 16:55:10.659377   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 16:55:10.659390   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 16:55:10.659406   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 16:55:10.659423   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 16:55:10.659493   85253 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/73397.pem (1338 bytes)
	W0919 16:55:10.659550   85253 certs.go:433] ignoring /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/73397_empty.pem, impossibly tiny 0 bytes
	I0919 16:55:10.659573   85253 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 16:55:10.659613   85253 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem (1078 bytes)
	I0919 16:55:10.659637   85253 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem (1123 bytes)
	I0919 16:55:10.659661   85253 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem (1675 bytes)
	I0919 16:55:10.659701   85253 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem (1708 bytes)
	I0919 16:55:10.659730   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem -> /usr/share/ca-certificates/733972.pem
	I0919 16:55:10.659743   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 16:55:10.659755   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/73397.pem -> /usr/share/ca-certificates/73397.pem
	I0919 16:55:10.660078   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 16:55:10.683241   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 16:55:10.705256   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 16:55:10.727098   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 16:55:10.749240   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem --> /usr/share/ca-certificates/733972.pem (1708 bytes)
	I0919 16:55:10.771451   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 16:55:10.793430   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/certs/73397.pem --> /usr/share/ca-certificates/73397.pem (1338 bytes)
	I0919 16:55:10.815000   85253 ssh_runner.go:195] Run: openssl version
	I0919 16:55:10.820161   85253 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0919 16:55:10.820534   85253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/733972.pem && ln -fs /usr/share/ca-certificates/733972.pem /etc/ssl/certs/733972.pem"
	I0919 16:55:10.830637   85253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/733972.pem
	I0919 16:55:10.835200   85253 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 19 16:39 /usr/share/ca-certificates/733972.pem
	I0919 16:55:10.835233   85253 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 19 16:39 /usr/share/ca-certificates/733972.pem
	I0919 16:55:10.835271   85253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/733972.pem
	I0919 16:55:10.840910   85253 command_runner.go:130] > 3ec20f2e
	I0919 16:55:10.840965   85253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/733972.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 16:55:10.850956   85253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 16:55:10.861189   85253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 16:55:10.865423   85253 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 19 16:35 /usr/share/ca-certificates/minikubeCA.pem
	I0919 16:55:10.865448   85253 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 19 16:35 /usr/share/ca-certificates/minikubeCA.pem
	I0919 16:55:10.865536   85253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 16:55:10.870642   85253 command_runner.go:130] > b5213941
	I0919 16:55:10.870991   85253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 16:55:10.881324   85253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73397.pem && ln -fs /usr/share/ca-certificates/73397.pem /etc/ssl/certs/73397.pem"
	I0919 16:55:10.891328   85253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73397.pem
	I0919 16:55:10.895658   85253 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 19 16:39 /usr/share/ca-certificates/73397.pem
	I0919 16:55:10.895844   85253 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 19 16:39 /usr/share/ca-certificates/73397.pem
	I0919 16:55:10.895905   85253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73397.pem
	I0919 16:55:10.901023   85253 command_runner.go:130] > 51391683
	I0919 16:55:10.901152   85253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/73397.pem /etc/ssl/certs/51391683.0"
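Each symlink step above follows the OpenSSL hashed-directory convention: a CA in /etc/ssl/certs is located by a file named <subject-hash>.0, where the hash is what `openssl x509 -hash -noout` prints (e.g. 3ec20f2e in the log). A small sketch of that install step (assumes openssl is on PATH; paths are taken from the log):

// certlink.go - sketch of the hash-symlink step used to register a CA cert.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(pemPath, certsDir string) error {
	// Ask openssl for the subject hash that names the symlink.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace an existing link, like `ln -fs`
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/733972.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}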
	I0919 16:55:10.911034   85253 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0919 16:55:10.914906   85253 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0919 16:55:10.915020   85253 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0919 16:55:10.915110   85253 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 16:55:10.947075   85253 command_runner.go:130] > cgroupfs
	I0919 16:55:10.947160   85253 cni.go:84] Creating CNI manager for ""
	I0919 16:55:10.947178   85253 cni.go:136] 2 nodes found, recommending kindnet
	I0919 16:55:10.947199   85253 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0919 16:55:10.947228   85253 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.170 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-415589 NodeName:multinode-415589-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 16:55:10.947359   85253 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.170
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-415589-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 16:55:10.947445   85253 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-415589-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-415589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
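Configs like the ones above are rendered from per-node parameters (IP, CRI socket, node name, port). A minimal sketch of that rendering with Go's text/template, using values from this log; the template fragment is illustrative, not minikube's actual one:

// kubeadmcfg.go - sketch of rendering an InitConfiguration fragment like the
// one above from node parameters. The template is an assumed simplification.
package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	params := struct {
		NodeIP, CRISocket, NodeName string
		Port                        int
	}{
		NodeIP:    "192.168.50.170",
		CRISocket: "/var/run/cri-dockerd.sock",
		NodeName:  "multinode-415589-m02",
		Port:      8443,
	}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}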
	I0919 16:55:10.947518   85253 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0919 16:55:10.958393   85253 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.2': No such file or directory
	I0919 16:55:10.958441   85253 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.2': No such file or directory
	
	Initiating transfer...
	I0919 16:55:10.958497   85253 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.2
	I0919 16:55:10.968039   85253 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubectl.sha256
	I0919 16:55:10.968050   85253 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17240-65689/.minikube/cache/linux/amd64/v1.28.2/kubeadm
	I0919 16:55:10.968055   85253 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17240-65689/.minikube/cache/linux/amd64/v1.28.2/kubelet
	I0919 16:55:10.968066   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/cache/linux/amd64/v1.28.2/kubectl -> /var/lib/minikube/binaries/v1.28.2/kubectl
	I0919 16:55:10.968137   85253 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.2/kubectl
	I0919 16:55:10.972967   85253 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubectl': No such file or directory
	I0919 16:55:10.973002   85253 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubectl': No such file or directory
	I0919 16:55:10.973020   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/cache/linux/amd64/v1.28.2/kubectl --> /var/lib/minikube/binaries/v1.28.2/kubectl (49864704 bytes)
	I0919 16:55:18.859012   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/cache/linux/amd64/v1.28.2/kubeadm -> /var/lib/minikube/binaries/v1.28.2/kubeadm
	I0919 16:55:18.859096   85253 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.2/kubeadm
	I0919 16:55:18.864141   85253 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubeadm': No such file or directory
	I0919 16:55:18.864192   85253 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubeadm': No such file or directory
	I0919 16:55:18.864217   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/cache/linux/amd64/v1.28.2/kubeadm --> /var/lib/minikube/binaries/v1.28.2/kubeadm (50757632 bytes)
	I0919 16:55:19.885737   85253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 16:55:19.901648   85253 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17240-65689/.minikube/cache/linux/amd64/v1.28.2/kubelet -> /var/lib/minikube/binaries/v1.28.2/kubelet
	I0919 16:55:19.901759   85253 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.2/kubelet
	I0919 16:55:19.905958   85253 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubelet': No such file or directory
	I0919 16:55:19.905997   85253 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubelet': No such file or directory
	I0919 16:55:19.906028   85253 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/cache/linux/amd64/v1.28.2/kubelet --> /var/lib/minikube/binaries/v1.28.2/kubelet (110776320 bytes)
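The download URLs above carry ?checksum=file:...sha256, i.e. each binary is verified against its published SHA-256 digest before being copied into /var/lib/minikube/binaries. A sketch of that verification step (file names here are placeholders; the .sha256 file is assumed to hold just the hex digest, as dl.k8s.io's do):

// verify.go - sketch of checking a downloaded binary against its .sha256 file.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

func verifySHA256(binPath, sumPath string) error {
	f, err := os.Open(binPath)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	want, err := os.ReadFile(sumPath) // published hex digest
	if err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch for %s: got %s", binPath, got)
	}
	return nil
}

func main() {
	if err := verifySHA256("kubelet", "kubelet.sha256"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("checksum OK")
}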
	I0919 16:55:20.423935   85253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0919 16:55:20.433123   85253 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (383 bytes)
	I0919 16:55:20.448768   85253 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 16:55:20.464104   85253 ssh_runner.go:195] Run: grep 192.168.50.11	control-plane.minikube.internal$ /etc/hosts
	I0919 16:55:20.467681   85253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 16:55:20.479501   85253 host.go:66] Checking if "multinode-415589" exists ...
	I0919 16:55:20.479768   85253 config.go:182] Loaded profile config "multinode-415589": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 16:55:20.479981   85253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 16:55:20.480039   85253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:55:20.494283   85253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43901
	I0919 16:55:20.494711   85253 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:55:20.495164   85253 main.go:141] libmachine: Using API Version  1
	I0919 16:55:20.495212   85253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:55:20.495514   85253 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:55:20.495727   85253 main.go:141] libmachine: (multinode-415589) Calling .DriverName
	I0919 16:55:20.495852   85253 start.go:304] JoinCluster: &{Name:multinode-415589 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-415589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.11 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.50.170 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 16:55:20.495979   85253 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 16:55:20.495998   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
	I0919 16:55:20.499279   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:55:20.499710   85253 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
	I0919 16:55:20.499741   85253 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:55:20.499859   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
	I0919 16:55:20.500070   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
	I0919 16:55:20.500246   85253 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
	I0919 16:55:20.500397   85253 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/id_rsa Username:docker}
	I0919 16:55:20.681527   85253 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token lvxs0o.g54z5vfgz74yr442 --discovery-token-ca-cert-hash sha256:f578345f21d61b70dd299dc2a715bc70c42e620a22beab30c3294ae8bc341510 
	I0919 16:55:20.681823   85253 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.50.170 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0919 16:55:20.681871   85253 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lvxs0o.g54z5vfgz74yr442 --discovery-token-ca-cert-hash sha256:f578345f21d61b70dd299dc2a715bc70c42e620a22beab30c3294ae8bc341510 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-415589-m02"
	I0919 16:55:20.723585   85253 command_runner.go:130] > [preflight] Running pre-flight checks
	I0919 16:55:20.888718   85253 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0919 16:55:20.888750   85253 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0919 16:55:20.927308   85253 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 16:55:20.927341   85253 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 16:55:20.927350   85253 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0919 16:55:21.049684   85253 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0919 16:55:23.092606   85253 command_runner.go:130] > This node has joined the cluster:
	I0919 16:55:23.092638   85253 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0919 16:55:23.092650   85253 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0919 16:55:23.092660   85253 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0919 16:55:23.094423   85253 command_runner.go:130] ! W0919 16:55:20.719408    1164 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0919 16:55:23.094452   85253 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 16:55:23.094525   85253 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lvxs0o.g54z5vfgz74yr442 --discovery-token-ca-cert-hash sha256:f578345f21d61b70dd299dc2a715bc70c42e620a22beab30c3294ae8bc341510 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-415589-m02": (2.412618672s)
	I0919 16:55:23.094564   85253 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 16:55:23.319102   85253 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0919 16:55:23.319150   85253 start.go:306] JoinCluster complete in 2.823297884s
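The join above is two steps: print a join command with a fresh token on the control plane, then run it on the worker with the extra flags seen in the log. A compact sketch of the same flow (running both commands locally stands in for the SSH runner; kubeadm must be on PATH):

// joinworker.go - sketch of the token-create / kubeadm-join sequence above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Control-plane side: mint a join command with a non-expiring token,
	// as in `kubeadm token create --print-join-command --ttl=0` above.
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		log.Fatal(err)
	}
	join := strings.Fields(strings.TrimSpace(string(out))) // ["kubeadm", "join", ...]

	// Worker side: run the join with the same extra flags the log shows.
	args := append(join[1:],
		"--ignore-preflight-errors=all",
		"--cri-socket", "/var/run/cri-dockerd.sock",
		"--node-name=multinode-415589-m02")
	if b, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
		log.Fatalf("join failed: %v\n%s", err, b)
	}
	fmt.Println("worker joined the cluster")
}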
	I0919 16:55:23.319166   85253 cni.go:84] Creating CNI manager for ""
	I0919 16:55:23.319183   85253 cni.go:136] 2 nodes found, recommending kindnet
	I0919 16:55:23.319248   85253 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 16:55:23.324833   85253 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0919 16:55:23.324850   85253 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0919 16:55:23.324857   85253 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0919 16:55:23.324863   85253 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0919 16:55:23.324869   85253 command_runner.go:130] > Access: 2023-09-19 16:53:37.309210321 +0000
	I0919 16:55:23.324874   85253 command_runner.go:130] > Modify: 2023-09-19 00:07:45.000000000 +0000
	I0919 16:55:23.324882   85253 command_runner.go:130] > Change: 2023-09-19 16:53:35.557210321 +0000
	I0919 16:55:23.324888   85253 command_runner.go:130] >  Birth: -
	I0919 16:55:23.325228   85253 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I0919 16:55:23.325243   85253 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0919 16:55:23.342847   85253 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 16:55:23.645641   85253 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0919 16:55:23.649658   85253 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0919 16:55:23.652505   85253 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0919 16:55:23.664138   85253 command_runner.go:130] > daemonset.apps/kindnet configured
	I0919 16:55:23.667036   85253 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17240-65689/kubeconfig
	I0919 16:55:23.667284   85253 kapi.go:59] client config for multinode-415589: &rest.Config{Host:"https://192.168.50.11:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.crt", KeyFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.key", CAFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf0e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 16:55:23.667614   85253 round_trippers.go:463] GET https://192.168.50.11:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0919 16:55:23.667628   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:23.667639   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:23.667648   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:23.669483   85253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 16:55:23.669498   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:23.669505   85253 round_trippers.go:580]     Content-Length: 291
	I0919 16:55:23.669510   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:23 GMT
	I0919 16:55:23.669516   85253 round_trippers.go:580]     Audit-Id: a7e12e49-a619-4239-974e-6f74a31fab43
	I0919 16:55:23.669521   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:23.669528   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:23.669537   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:23.669544   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:23.669600   85253 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"51735e10-f9cc-4bf5-9383-854f680ad544","resourceVersion":"417","creationTimestamp":"2023-09-19T16:54:11Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0919 16:55:23.669717   85253 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-415589" context rescaled to 1 replicas
	I0919 16:55:23.669749   85253 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.50.170 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0919 16:55:23.672301   85253 out.go:177] * Verifying Kubernetes components...
	I0919 16:55:23.674012   85253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 16:55:23.687762   85253 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17240-65689/kubeconfig
	I0919 16:55:23.688061   85253 kapi.go:59] client config for multinode-415589: &rest.Config{Host:"https://192.168.50.11:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.crt", KeyFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/profiles/multinode-415589/client.key", CAFile:"/home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf0e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 16:55:23.688313   85253 node_ready.go:35] waiting up to 6m0s for node "multinode-415589-m02" to be "Ready" ...
	I0919 16:55:23.688375   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
	I0919 16:55:23.688382   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:23.688390   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:23.688396   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:23.694935   85253 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0919 16:55:23.694954   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:23.694962   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:23.694967   85253 round_trippers.go:580]     Content-Length: 3485
	I0919 16:55:23.694972   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:23 GMT
	I0919 16:55:23.694977   85253 round_trippers.go:580]     Audit-Id: 91bb95cb-b0fc-4cff-851c-378e69b586cd
	I0919 16:55:23.694983   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:23.694988   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:23.694992   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:23.695417   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"476","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2461 chars]
	I0919 16:55:23.695702   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
	I0919 16:55:23.695715   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:23.695726   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:23.695734   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:23.701000   85253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 16:55:23.701017   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:23.701024   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:23.701032   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:23.701038   85253 round_trippers.go:580]     Content-Length: 3485
	I0919 16:55:23.701043   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:23 GMT
	I0919 16:55:23.701048   85253 round_trippers.go:580]     Audit-Id: c37695e3-053c-450c-b5c4-e474a565f6e3
	I0919 16:55:23.701056   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:23.701063   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:23.701188   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"476","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2461 chars]
	I0919 16:55:24.201546   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
	I0919 16:55:24.201569   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:24.201577   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:24.201583   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:24.205054   85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 16:55:24.205078   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:24.205091   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:24.205101   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:24.205108   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:24.205115   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:24.205123   85253 round_trippers.go:580]     Content-Length: 3485
	I0919 16:55:24.205131   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:24 GMT
	I0919 16:55:24.205144   85253 round_trippers.go:580]     Audit-Id: 553ada15-eabd-4f27-8bb5-2cb7cf80744d
	I0919 16:55:24.205191   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"476","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2461 chars]
	I0919 16:55:24.701770   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
	I0919 16:55:24.701793   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:24.701801   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:24.701807   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:24.704734   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:55:24.704759   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:24.704771   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:24 GMT
	I0919 16:55:24.704779   85253 round_trippers.go:580]     Audit-Id: 914d3424-3a8f-4709-aa7e-c334805d8933
	I0919 16:55:24.704784   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:24.704789   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:24.704794   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:24.704799   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:24.704804   85253 round_trippers.go:580]     Content-Length: 3485
	I0919 16:55:24.705027   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"476","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2461 chars]
	I0919 16:55:25.202202   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
	I0919 16:55:25.202225   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:25.202233   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:25.202240   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:25.205102   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:55:25.205126   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:25.205135   85253 round_trippers.go:580]     Audit-Id: b58153d5-0210-4178-9bae-c6b41fffb1c7
	I0919 16:55:25.205143   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:25.205150   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:25.205158   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:25.205165   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:25.205173   85253 round_trippers.go:580]     Content-Length: 3485
	I0919 16:55:25.205183   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:25 GMT
	I0919 16:55:25.205353   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"476","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2461 chars]
	I0919 16:55:25.702600   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
	I0919 16:55:25.702623   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:25.702632   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:25.702638   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:25.705574   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:55:25.705604   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:25.705630   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:25.705641   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:25.705651   85253 round_trippers.go:580]     Content-Length: 3485
	I0919 16:55:25.705660   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:25 GMT
	I0919 16:55:25.705672   85253 round_trippers.go:580]     Audit-Id: 0d3e4335-21aa-44d4-98af-f7ac5363b6ee
	I0919 16:55:25.705681   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:25.705697   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:25.705856   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"476","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2461 chars]
	I0919 16:55:25.706142   85253 node_ready.go:58] node "multinode-415589-m02" has status "Ready":"False"
	I0919 16:55:26.202555   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
	I0919 16:55:26.202580   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:26.202589   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:26.202597   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:26.205487   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:55:26.205505   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:26.205513   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:26.205518   85253 round_trippers.go:580]     Content-Length: 3485
	I0919 16:55:26.205523   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:26 GMT
	I0919 16:55:26.205528   85253 round_trippers.go:580]     Audit-Id: b5cef305-be4e-41f3-b406-55ce5f65d0ea
	I0919 16:55:26.205534   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:26.205544   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:26.205558   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:26.205777   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"476","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2461 chars]
	I0919 16:55:26.702435   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
	I0919 16:55:26.702459   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:26.702467   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:26.702474   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:26.705052   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:55:26.705073   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:26.705080   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:26 GMT
	I0919 16:55:26.705085   85253 round_trippers.go:580]     Audit-Id: 870a34de-2f2b-4b18-baec-d9a4b057af16
	I0919 16:55:26.705090   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:26.705095   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:26.705101   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:26.705110   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:26.705115   85253 round_trippers.go:580]     Content-Length: 3485
	I0919 16:55:26.705154   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"476","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2461 chars]
	I0919 16:55:27.201782   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
	I0919 16:55:27.201810   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:27.201823   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:27.201833   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:27.206925   85253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 16:55:27.206957   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:27.206970   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:27 GMT
	I0919 16:55:27.206982   85253 round_trippers.go:580]     Audit-Id: 4debd3c4-57c9-41ad-89df-f25ae5e392d9
	I0919 16:55:27.206992   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:27.207002   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:27.207017   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:27.207031   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:27.207045   85253 round_trippers.go:580]     Content-Length: 3485
	I0919 16:55:27.207221   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"476","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 2461 chars]
	I0919 16:55:27.701860   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
	I0919 16:55:27.701891   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:27.701905   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:27.701917   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:27.705535   85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 16:55:27.705561   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:27.705571   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:27.705580   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:27.705587   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:27.705596   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:27.705603   85253 round_trippers.go:580]     Content-Length: 3594
	I0919 16:55:27.705660   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:27 GMT
	I0919 16:55:27.705680   85253 round_trippers.go:580]     Audit-Id: 74bff726-eea9-44ef-afbf-c9a8e94a6518
	I0919 16:55:27.705843   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"484","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2570 chars]
	I0919 16:55:27.706167   85253 node_ready.go:58] node "multinode-415589-m02" has status "Ready":"False"
	I0919 16:55:28.202443   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
	I0919 16:55:28.202466   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:28.202475   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:28.202484   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:28.206034   85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 16:55:28.206051   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:28.206058   85253 round_trippers.go:580]     Audit-Id: cd4347d4-f582-4623-9efb-82e18bed2113
	I0919 16:55:28.206063   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:28.206068   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:28.206073   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:28.206078   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:28.206083   85253 round_trippers.go:580]     Content-Length: 3594
	I0919 16:55:28.206089   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:28 GMT
	I0919 16:55:28.206145   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"484","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2570 chars]
	I0919 16:55:28.701820   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
	I0919 16:55:28.701857   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:28.701869   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:28.701879   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:28.704940   85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 16:55:28.704966   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:28.704977   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:28.704985   85253 round_trippers.go:580]     Content-Length: 3594
	I0919 16:55:28.704993   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:28 GMT
	I0919 16:55:28.705002   85253 round_trippers.go:580]     Audit-Id: 4c79e7ac-32e0-4d36-be6a-d6e214398f2e
	I0919 16:55:28.705010   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:28.705022   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:28.705030   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:28.705124   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"484","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2570 chars]
	I0919 16:55:29.202507   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
	I0919 16:55:29.202539   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:29.202551   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:29.202560   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:29.205144   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:55:29.205168   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:29.205179   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:29.205188   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:29.205196   85253 round_trippers.go:580]     Content-Length: 3594
	I0919 16:55:29.205204   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:29 GMT
	I0919 16:55:29.205217   85253 round_trippers.go:580]     Audit-Id: d2442b67-4b5c-4047-9ccc-4d9bdfe470f9
	I0919 16:55:29.205225   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:29.205244   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:29.205340   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"484","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2570 chars]
	I0919 16:55:29.701809   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
	I0919 16:55:29.701832   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:29.701841   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:29.701847   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:29.704600   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:55:29.704631   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:29.704642   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:29.704652   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:29.704664   85253 round_trippers.go:580]     Content-Length: 3594
	I0919 16:55:29.704687   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:29 GMT
	I0919 16:55:29.704701   85253 round_trippers.go:580]     Audit-Id: 619cabf8-0efc-437c-b0c4-6b97d754001c
	I0919 16:55:29.704713   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:29.704725   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:29.704806   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"484","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2570 chars]
	I0919 16:55:30.201674   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
	I0919 16:55:30.201697   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:30.201708   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:30.201716   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:30.205070   85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 16:55:30.205096   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:30.205105   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:30.205113   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:30.205120   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:30.205128   85253 round_trippers.go:580]     Content-Length: 3594
	I0919 16:55:30.205138   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:30 GMT
	I0919 16:55:30.205151   85253 round_trippers.go:580]     Audit-Id: 47a55611-c403-4cd7-878e-a97f18395a5b
	I0919 16:55:30.205158   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:30.205275   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"484","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2570 chars]
	I0919 16:55:30.205565   85253 node_ready.go:58] node "multinode-415589-m02" has status "Ready":"False"
	I0919 16:55:30.701581   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
	I0919 16:55:30.701606   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:30.701630   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:30.701640   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:30.704917   85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 16:55:30.704940   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:30.704954   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:30.704962   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:30.704970   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:30.704978   85253 round_trippers.go:580]     Content-Length: 3594
	I0919 16:55:30.704984   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:30 GMT
	I0919 16:55:30.704992   85253 round_trippers.go:580]     Audit-Id: 6a173b36-96dd-446f-b1af-f195a7f9d5ee
	I0919 16:55:30.705001   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:30.705082   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"484","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2570 chars]
	I0919 16:55:31.201575   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
	I0919 16:55:31.201601   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:31.201610   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:31.201627   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:31.204728   85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 16:55:31.204745   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:31.204752   85253 round_trippers.go:580]     Audit-Id: 7bc97be9-7e6e-4bc6-b1b0-05556c1990f2
	I0919 16:55:31.204761   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:31.204767   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:31.204772   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:31.204777   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:31.204782   85253 round_trippers.go:580]     Content-Length: 3594
	I0919 16:55:31.204792   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:31 GMT
	I0919 16:55:31.204851   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"484","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2570 chars]
	I0919 16:55:31.702519   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
	I0919 16:55:31.702543   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:31.702552   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:31.702558   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:31.705585   85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 16:55:31.705597   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:31.705603   85253 round_trippers.go:580]     Audit-Id: b8069767-ee39-44b0-8935-e099e18f543b
	I0919 16:55:31.705608   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:31.705632   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:31.705641   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:31.705651   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:31.705663   85253 round_trippers.go:580]     Content-Length: 3594
	I0919 16:55:31.705671   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:31 GMT
	I0919 16:55:31.705742   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"484","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2570 chars]
	I0919 16:55:32.202360   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
	I0919 16:55:32.202383   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:32.202391   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:32.202397   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:32.205402   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:55:32.205430   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:32.205441   85253 round_trippers.go:580]     Audit-Id: 880c8325-06a4-4df9-803a-6ee6b237c8fe
	I0919 16:55:32.205450   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:32.205459   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:32.205471   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:32.205482   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:32.205490   85253 round_trippers.go:580]     Content-Length: 3594
	I0919 16:55:32.205501   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:32 GMT
	I0919 16:55:32.205598   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"484","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2570 chars]
	I0919 16:55:32.205944   85253 node_ready.go:58] node "multinode-415589-m02" has status "Ready":"False"
	I0919 16:55:32.701824   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
	I0919 16:55:32.701848   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:32.701860   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:32.701868   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:32.705125   85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 16:55:32.705150   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:32.705167   85253 round_trippers.go:580]     Content-Length: 3594
	I0919 16:55:32.705176   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:32 GMT
	I0919 16:55:32.705188   85253 round_trippers.go:580]     Audit-Id: 70bca672-9e67-41b6-aefa-8bd9c065262c
	I0919 16:55:32.705198   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:32.705208   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:32.705216   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:32.705231   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:32.705346   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"484","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2570 chars]
	I0919 16:55:33.202224   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
	I0919 16:55:33.202249   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:33.202257   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:33.202263   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:33.206515   85253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 16:55:33.206537   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:33.206547   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:33 GMT
	I0919 16:55:33.206554   85253 round_trippers.go:580]     Audit-Id: d5bb848b-69da-4afc-8838-0db90c489392
	I0919 16:55:33.206567   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:33.206574   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:33.206583   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:33.206592   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:33.206602   85253 round_trippers.go:580]     Content-Length: 3863
	I0919 16:55:33.206854   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"500","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2839 chars]
	I0919 16:55:33.701678   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
	I0919 16:55:33.701699   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:33.701707   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:33.701719   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:33.705206   85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 16:55:33.705231   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:33.705251   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:33.705259   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:33.705265   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:33.705274   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:33.705287   85253 round_trippers.go:580]     Content-Length: 3863
	I0919 16:55:33.705315   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:33 GMT
	I0919 16:55:33.705326   85253 round_trippers.go:580]     Audit-Id: 3182db7b-4fbd-47bb-9a32-780c008cf00f
	I0919 16:55:33.705417   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"500","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2839 chars]
	I0919 16:55:34.201682   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
	I0919 16:55:34.201708   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:34.201716   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:34.201722   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:34.205796   85253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 16:55:34.205827   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:34.205838   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:34.205846   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:34.205851   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:34.205858   85253 round_trippers.go:580]     Content-Length: 3863
	I0919 16:55:34.205863   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:34 GMT
	I0919 16:55:34.205871   85253 round_trippers.go:580]     Audit-Id: 95390aab-08bd-4fb1-b1d7-3691517f17a2
	I0919 16:55:34.205879   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:34.206026   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"500","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2839 chars]
	I0919 16:55:34.206278   85253 node_ready.go:58] node "multinode-415589-m02" has status "Ready":"False"
	I0919 16:55:34.702549   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
	I0919 16:55:34.702572   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:34.702582   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:34.702588   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:34.705324   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:55:34.705350   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:34.705360   85253 round_trippers.go:580]     Audit-Id: d35964ff-a80b-4162-bae9-99046d34d339
	I0919 16:55:34.705369   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:34.705382   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:34.705399   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:34.705411   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:34.705422   85253 round_trippers.go:580]     Content-Length: 3863
	I0919 16:55:34.705434   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:34 GMT
	I0919 16:55:34.705538   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"500","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2839 chars]
	I0919 16:55:35.202100   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
	I0919 16:55:35.202123   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:35.202132   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:35.202137   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:35.205833   85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 16:55:35.205848   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:35.205854   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:35.205860   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:35.205866   85253 round_trippers.go:580]     Content-Length: 3863
	I0919 16:55:35.205871   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:35 GMT
	I0919 16:55:35.205876   85253 round_trippers.go:580]     Audit-Id: d774542d-67bc-47da-8f82-f18b224250a6
	I0919 16:55:35.205881   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:35.205887   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:35.205951   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"500","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2839 chars]
	I0919 16:55:35.702144   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
	I0919 16:55:35.702168   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:35.702177   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:35.702182   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:35.706480   85253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 16:55:35.706504   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:35.706512   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:35.706517   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:35.706523   85253 round_trippers.go:580]     Content-Length: 3729
	I0919 16:55:35.706529   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:35 GMT
	I0919 16:55:35.706534   85253 round_trippers.go:580]     Audit-Id: 0721d636-e7bc-40dc-9010-597bfe183d1f
	I0919 16:55:35.706542   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:35.706548   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:35.706619   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"509","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2705 chars]
	I0919 16:55:35.706865   85253 node_ready.go:49] node "multinode-415589-m02" has status "Ready":"True"
	I0919 16:55:35.706879   85253 node_ready.go:38] duration metric: took 12.01855268s waiting for node "multinode-415589-m02" to be "Ready" ...
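The loop above is the node_ready poll: one GET on /api/v1/nodes/multinode-415589-m02 roughly every 500ms until the node's Ready condition flips to "True" (visible here as resourceVersion advancing 476 -> 484 -> 500 -> 509 while kubelet posts status updates). For reference, a minimal client-go sketch of the same style of check; this is illustrative only, not minikube's node_ready.go, and the kubeconfig path and timeout are assumptions:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; minikube writes one per profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms, matching the request cadence in the log above;
	// the 6-minute ceiling is borrowed from the pod-wait budget mentioned below.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "multinode-415589-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet" and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}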
	I0919 16:55:35.706891   85253 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 16:55:35.706946   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods
	I0919 16:55:35.706954   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:35.706961   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:35.706966   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:35.712976   85253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 16:55:35.712999   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:35.713009   85253 round_trippers.go:580]     Audit-Id: eb85a93e-ad5e-4343-b018-623ce9a1e5b4
	I0919 16:55:35.713015   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:35.713020   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:35.713025   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:35.713030   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:35.713035   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:35 GMT
	I0919 16:55:35.721118   85253 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"509"},"items":[{"metadata":{"name":"coredns-5dd5756b68-ctsv5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d4fcd880-e2ad-4d44-a070-e2af114e5e38","resourceVersion":"413","creationTimestamp":"2023-09-19T16:54:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6b79652-1294-4fc4-9085-191712db297a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6b79652-1294-4fc4-9085-191712db297a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67482 chars]
	I0919 16:55:35.723180   85253 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ctsv5" in "kube-system" namespace to be "Ready" ...
	I0919 16:55:35.723255   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-ctsv5
	I0919 16:55:35.723263   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:35.723270   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:35.723276   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:35.726205   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:55:35.726223   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:35.726233   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:35.726240   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:35.726251   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:35.726260   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:35.726265   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:35 GMT
	I0919 16:55:35.726270   85253 round_trippers.go:580]     Audit-Id: 1ca77441-e6bb-4d84-ae80-c1964628ed16
	I0919 16:55:35.726963   85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-ctsv5","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d4fcd880-e2ad-4d44-a070-e2af114e5e38","resourceVersion":"413","creationTimestamp":"2023-09-19T16:54:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c6b79652-1294-4fc4-9085-191712db297a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6b79652-1294-4fc4-9085-191712db297a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6283 chars]
	I0919 16:55:35.727353   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:55:35.727368   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:35.727374   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:35.727380   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:35.731047   85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 16:55:35.731061   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:35.731067   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:35.731073   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:35.731081   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:35 GMT
	I0919 16:55:35.731090   85253 round_trippers.go:580]     Audit-Id: 4355d6a0-01aa-4f79-b54a-fc7b054d228c
	I0919 16:55:35.731102   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:35.731116   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:35.731197   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"423","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0919 16:55:35.731458   85253 pod_ready.go:92] pod "coredns-5dd5756b68-ctsv5" in "kube-system" namespace has status "Ready":"True"
	I0919 16:55:35.731470   85253 pod_ready.go:81] duration metric: took 8.270533ms waiting for pod "coredns-5dd5756b68-ctsv5" in "kube-system" namespace to be "Ready" ...
	I0919 16:55:35.731477   85253 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-415589" in "kube-system" namespace to be "Ready" ...
	I0919 16:55:35.731520   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-415589
	I0919 16:55:35.731528   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:35.731534   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:35.731540   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:35.733436   85253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 16:55:35.733455   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:35.733463   85253 round_trippers.go:580]     Audit-Id: 53bc64f4-ab9d-4976-b10a-7446df67b0a3
	I0919 16:55:35.733471   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:35.733478   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:35.733489   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:35.733496   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:35.733508   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:35 GMT
	I0919 16:55:35.734404   85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-415589","namespace":"kube-system","uid":"1dbf3be3-1373-453b-a745-575b7f604586","resourceVersion":"383","creationTimestamp":"2023-09-19T16:54:11Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.11:2379","kubernetes.io/config.hash":"6df6017a63b31f0e4794b474c009f352","kubernetes.io/config.mirror":"6df6017a63b31f0e4794b474c009f352","kubernetes.io/config.seen":"2023-09-19T16:54:11.230739231Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5862 chars]
	I0919 16:55:35.734838   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:55:35.734852   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:35.734859   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:35.734865   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:35.736677   85253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 16:55:35.736690   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:35.736696   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:35.736701   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:35.736706   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:35.736714   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:35 GMT
	I0919 16:55:35.736725   85253 round_trippers.go:580]     Audit-Id: 6e8160b3-094d-4760-858f-4ab6c86ef72b
	I0919 16:55:35.736730   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:35.736867   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"423","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0919 16:55:35.737223   85253 pod_ready.go:92] pod "etcd-multinode-415589" in "kube-system" namespace has status "Ready":"True"
	I0919 16:55:35.737241   85253 pod_ready.go:81] duration metric: took 5.758956ms waiting for pod "etcd-multinode-415589" in "kube-system" namespace to be "Ready" ...
	I0919 16:55:35.737253   85253 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-415589" in "kube-system" namespace to be "Ready" ...
	I0919 16:55:35.737301   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-415589
	I0919 16:55:35.737308   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:35.737315   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:35.737321   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:35.739057   85253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 16:55:35.739071   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:35.739076   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:35 GMT
	I0919 16:55:35.739082   85253 round_trippers.go:580]     Audit-Id: d807ee33-902c-4e1e-993d-07a3e1463870
	I0919 16:55:35.739087   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:35.739103   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:35.739115   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:35.739123   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:35.739287   85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-415589","namespace":"kube-system","uid":"4ecf615e-9f92-46f8-8b34-9de418bca0ac","resourceVersion":"384","creationTimestamp":"2023-09-19T16:54:11Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.50.11:8443","kubernetes.io/config.hash":"de462c90cfa089272f7e7f2885319010","kubernetes.io/config.mirror":"de462c90cfa089272f7e7f2885319010","kubernetes.io/config.seen":"2023-09-19T16:54:11.230732561Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7399 chars]
	I0919 16:55:35.739724   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:55:35.739737   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:35.739750   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:35.739767   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:35.741561   85253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 16:55:35.741581   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:35.741590   85253 round_trippers.go:580]     Audit-Id: 712bc466-fd71-4c4d-b5ee-1fb3befb699f
	I0919 16:55:35.741598   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:35.741605   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:35.741627   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:35.741640   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:35.741647   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:35 GMT
	I0919 16:55:35.741854   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"423","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0919 16:55:35.742136   85253 pod_ready.go:92] pod "kube-apiserver-multinode-415589" in "kube-system" namespace has status "Ready":"True"
	I0919 16:55:35.742150   85253 pod_ready.go:81] duration metric: took 4.886937ms waiting for pod "kube-apiserver-multinode-415589" in "kube-system" namespace to be "Ready" ...
	I0919 16:55:35.742160   85253 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-415589" in "kube-system" namespace to be "Ready" ...
	I0919 16:55:35.742206   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-415589
	I0919 16:55:35.742215   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:35.742226   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:35.742234   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:35.744044   85253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0919 16:55:35.744063   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:35.744072   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:35.744079   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:35.744088   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:35 GMT
	I0919 16:55:35.744096   85253 round_trippers.go:580]     Audit-Id: 65f50667-96fc-49ce-ae21-868d02a7f1fd
	I0919 16:55:35.744105   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:35.744116   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:35.744301   85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-415589","namespace":"kube-system","uid":"3b76511f-a4ea-484d-a0f7-6968c3abf350","resourceVersion":"385","creationTimestamp":"2023-09-19T16:54:09Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"504acb37dbf2142427850f2e779b05ad","kubernetes.io/config.mirror":"504acb37dbf2142427850f2e779b05ad","kubernetes.io/config.seen":"2023-09-19T16:54:02.792831460Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6969 chars]
	I0919 16:55:35.744623   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:55:35.744634   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:35.744640   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:35.744646   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:35.746763   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:55:35.746780   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:35.746788   85253 round_trippers.go:580]     Audit-Id: f5322e70-d174-4867-af35-9f447ec402d7
	I0919 16:55:35.746796   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:35.746807   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:35.746822   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:35.746835   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:35.746840   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:35 GMT
	I0919 16:55:35.747018   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"423","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0919 16:55:35.747284   85253 pod_ready.go:92] pod "kube-controller-manager-multinode-415589" in "kube-system" namespace has status "Ready":"True"
	I0919 16:55:35.747298   85253 pod_ready.go:81] duration metric: took 5.131834ms waiting for pod "kube-controller-manager-multinode-415589" in "kube-system" namespace to be "Ready" ...
	I0919 16:55:35.747307   85253 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hxjql" in "kube-system" namespace to be "Ready" ...
	I0919 16:55:35.902711   85253 request.go:629] Waited for 155.344797ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hxjql
	I0919 16:55:35.902803   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hxjql
	I0919 16:55:35.902815   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:35.902825   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:35.902832   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:35.906200   85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 16:55:35.906224   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:35.906234   85253 round_trippers.go:580]     Audit-Id: 9d6da219-1207-4844-baa8-86ae307f47b6
	I0919 16:55:35.906241   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:35.906249   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:35.906255   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:35.906261   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:35.906266   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:35 GMT
	I0919 16:55:35.906801   85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hxjql","generateName":"kube-proxy-","namespace":"kube-system","uid":"6cebe5c5-4e29-4835-84b9-057c096c799a","resourceVersion":"495","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"5f6891df-57ac-4a88-9703-82c35d43e2eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5f6891df-57ac-4a88-9703-82c35d43e2eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
	I0919 16:55:36.102730   85253 request.go:629] Waited for 195.394934ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
	I0919 16:55:36.102818   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589-m02
	I0919 16:55:36.102831   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:36.102843   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:36.102858   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:36.106043   85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 16:55:36.106070   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:36.106081   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:36.106090   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:36.106098   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:36.106107   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:36.106114   85253 round_trippers.go:580]     Content-Length: 3729
	I0919 16:55:36.106126   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:36 GMT
	I0919 16:55:36.106133   85253 round_trippers.go:580]     Audit-Id: 77c620bc-2642-4da5-869f-56d2927a88cf
	I0919 16:55:36.106248   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589-m02","uid":"c01c961f-7fab-4b50-a0a5-ab3976632c19","resourceVersion":"509","creationTimestamp":"2023-09-19T16:55:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:55:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 2705 chars]
	I0919 16:55:36.106579   85253 pod_ready.go:92] pod "kube-proxy-hxjql" in "kube-system" namespace has status "Ready":"True"
	I0919 16:55:36.106605   85253 pod_ready.go:81] duration metric: took 359.291297ms waiting for pod "kube-proxy-hxjql" in "kube-system" namespace to be "Ready" ...
	I0919 16:55:36.106620   85253 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-r6jtp" in "kube-system" namespace to be "Ready" ...
	I0919 16:55:36.303051   85253 request.go:629] Waited for 196.339109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r6jtp
	I0919 16:55:36.303115   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r6jtp
	I0919 16:55:36.303120   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:36.303128   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:36.303134   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:36.306061   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:55:36.306086   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:36.306094   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:36.306100   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:36.306108   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:36.306116   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:36.306129   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:36 GMT
	I0919 16:55:36.306141   85253 round_trippers.go:580]     Audit-Id: 58bd7e8f-1052-4456-8883-18d8c69f9483
	I0919 16:55:36.306456   85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-r6jtp","generateName":"kube-proxy-","namespace":"kube-system","uid":"a1f6a8f6-f608-4f79-9fd4-1a570bde14a6","resourceVersion":"376","creationTimestamp":"2023-09-19T16:54:23Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"5f6891df-57ac-4a88-9703-82c35d43e2eb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5f6891df-57ac-4a88-9703-82c35d43e2eb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
	I0919 16:55:36.502332   85253 request.go:629] Waited for 195.321555ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:55:36.502394   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:55:36.502399   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:36.502406   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:36.502412   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:36.505333   85253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 16:55:36.505356   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:36.505365   85253 round_trippers.go:580]     Audit-Id: b7938f84-6d9b-4604-adde-0c04bd478166
	I0919 16:55:36.505373   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:36.505382   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:36.505389   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:36.505397   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:36.505407   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:36 GMT
	I0919 16:55:36.505499   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"423","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0919 16:55:36.505836   85253 pod_ready.go:92] pod "kube-proxy-r6jtp" in "kube-system" namespace has status "Ready":"True"
	I0919 16:55:36.505853   85253 pod_ready.go:81] duration metric: took 399.224616ms waiting for pod "kube-proxy-r6jtp" in "kube-system" namespace to be "Ready" ...
	I0919 16:55:36.505866   85253 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-415589" in "kube-system" namespace to be "Ready" ...
	I0919 16:55:36.702294   85253 request.go:629] Waited for 196.330343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-415589
	I0919 16:55:36.702359   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-415589
	I0919 16:55:36.702364   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:36.702373   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:36.702400   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:36.705509   85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 16:55:36.705533   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:36.705544   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:36 GMT
	I0919 16:55:36.705553   85253 round_trippers.go:580]     Audit-Id: e386c998-e4de-4a1a-8788-153743d96eb5
	I0919 16:55:36.705561   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:36.705569   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:36.705581   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:36.705592   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:36.706362   85253 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-415589","namespace":"kube-system","uid":"6f43b8d1-3b77-4df6-8b66-7d08cf7c0682","resourceVersion":"362","creationTimestamp":"2023-09-19T16:54:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8d76d9bf6a9e2f131bdda3e4a41d04bb","kubernetes.io/config.mirror":"8d76d9bf6a9e2f131bdda3e4a41d04bb","kubernetes.io/config.seen":"2023-09-19T16:54:11.230737337Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-19T16:54:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4699 chars]
	I0919 16:55:36.902267   85253 request.go:629] Waited for 194.938605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:55:36.902326   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes/multinode-415589
	I0919 16:55:36.902331   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:36.902339   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:36.902345   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:36.905395   85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 16:55:36.905421   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:36.905430   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:36.905437   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:36.905445   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:36.905454   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:36 GMT
	I0919 16:55:36.905467   85253 round_trippers.go:580]     Audit-Id: e3f02859-9ef9-4cb2-8510-41ddc2f0d479
	I0919 16:55:36.905474   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:36.905932   85253 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"423","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-09-19T16:54:07Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0919 16:55:36.906275   85253 pod_ready.go:92] pod "kube-scheduler-multinode-415589" in "kube-system" namespace has status "Ready":"True"
	I0919 16:55:36.906292   85253 pod_ready.go:81] duration metric: took 400.416963ms waiting for pod "kube-scheduler-multinode-415589" in "kube-system" namespace to be "Ready" ...
	I0919 16:55:36.906302   85253 pod_ready.go:38] duration metric: took 1.199397384s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 16:55:36.906322   85253 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 16:55:36.906379   85253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 16:55:36.919851   85253 system_svc.go:56] duration metric: took 13.515461ms WaitForService to wait for kubelet.
	I0919 16:55:36.919881   85253 kubeadm.go:581] duration metric: took 13.250094673s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0919 16:55:36.919910   85253 node_conditions.go:102] verifying NodePressure condition ...
	I0919 16:55:37.102318   85253 request.go:629] Waited for 182.31861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.50.11:8443/api/v1/nodes
	I0919 16:55:37.102379   85253 round_trippers.go:463] GET https://192.168.50.11:8443/api/v1/nodes
	I0919 16:55:37.102395   85253 round_trippers.go:469] Request Headers:
	I0919 16:55:37.102406   85253 round_trippers.go:473]     Accept: application/json, */*
	I0919 16:55:37.102413   85253 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0919 16:55:37.105565   85253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 16:55:37.105590   85253 round_trippers.go:577] Response Headers:
	I0919 16:55:37.105598   85253 round_trippers.go:580]     Content-Type: application/json
	I0919 16:55:37.105606   85253 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 61c5b231-131c-4fd0-91c3-31811bbae13b
	I0919 16:55:37.105626   85253 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5929edde-3b1b-4a46-8e1b-99197a385522
	I0919 16:55:37.105636   85253 round_trippers.go:580]     Date: Tue, 19 Sep 2023 16:55:37 GMT
	I0919 16:55:37.105645   85253 round_trippers.go:580]     Audit-Id: dc355d8f-ee54-4443-8346-47a98e5197bc
	I0919 16:55:37.105652   85253 round_trippers.go:580]     Cache-Control: no-cache, private
	I0919 16:55:37.106133   85253 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"510"},"items":[{"metadata":{"name":"multinode-415589","uid":"fd31b5e1-d596-44c1-b0ff-916583a8513d","resourceVersion":"423","creationTimestamp":"2023-09-19T16:54:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-415589","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d69d3d50d3fb420e04057e6545e9fd90e260986","minikube.k8s.io/name":"multinode-415589","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_19T16_54_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 8708 chars]
	I0919 16:55:37.106637   85253 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 16:55:37.106661   85253 node_conditions.go:123] node cpu capacity is 2
	I0919 16:55:37.106674   85253 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 16:55:37.106680   85253 node_conditions.go:123] node cpu capacity is 2
	I0919 16:55:37.106686   85253 node_conditions.go:105] duration metric: took 186.767088ms to run NodePressure ...
	I0919 16:55:37.106699   85253 start.go:228] waiting for startup goroutines ...
	I0919 16:55:37.106728   85253 start.go:242] writing updated cluster config ...
	I0919 16:55:37.107027   85253 ssh_runner.go:195] Run: rm -f paused
	I0919 16:55:37.158485   85253 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0919 16:55:37.162156   85253 out.go:177] * Done! kubectl is now configured to use "multinode-415589" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-09-19 16:53:36 UTC, ends at Tue 2023-09-19 16:56:59 UTC. --
	Sep 19 16:54:36 multinode-415589 dockerd[1130]: time="2023-09-19T16:54:36.489451148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 16:54:36 multinode-415589 dockerd[1130]: time="2023-09-19T16:54:36.498672722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 19 16:54:36 multinode-415589 dockerd[1130]: time="2023-09-19T16:54:36.498748720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 16:54:36 multinode-415589 dockerd[1130]: time="2023-09-19T16:54:36.498773757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 19 16:54:36 multinode-415589 dockerd[1130]: time="2023-09-19T16:54:36.498789870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 16:54:36 multinode-415589 cri-dockerd[1012]: time="2023-09-19T16:54:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a48a984b726602555fe6103a682cf7c01cbdc4cfc063e347b37e7b664cd0efd9/resolv.conf as [nameserver 192.168.122.1]"
	Sep 19 16:54:37 multinode-415589 cri-dockerd[1012]: time="2023-09-19T16:54:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4dc73b0c19acc1823f938bdac00e9aef48901d30a9938252e5bfa445f3b60ab4/resolv.conf as [nameserver 192.168.122.1]"
	Sep 19 16:54:37 multinode-415589 dockerd[1130]: time="2023-09-19T16:54:37.117015304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 19 16:54:37 multinode-415589 dockerd[1130]: time="2023-09-19T16:54:37.117071075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 16:54:37 multinode-415589 dockerd[1130]: time="2023-09-19T16:54:37.117097252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 19 16:54:37 multinode-415589 dockerd[1130]: time="2023-09-19T16:54:37.117108566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 16:54:37 multinode-415589 dockerd[1130]: time="2023-09-19T16:54:37.233644781Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 19 16:54:37 multinode-415589 dockerd[1130]: time="2023-09-19T16:54:37.233828763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 16:54:37 multinode-415589 dockerd[1130]: time="2023-09-19T16:54:37.233990405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 19 16:54:37 multinode-415589 dockerd[1130]: time="2023-09-19T16:54:37.234051666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 16:55:38 multinode-415589 dockerd[1130]: time="2023-09-19T16:55:38.380980760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 19 16:55:38 multinode-415589 dockerd[1130]: time="2023-09-19T16:55:38.381116014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 16:55:38 multinode-415589 dockerd[1130]: time="2023-09-19T16:55:38.381144449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 19 16:55:38 multinode-415589 dockerd[1130]: time="2023-09-19T16:55:38.381156429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 16:55:38 multinode-415589 cri-dockerd[1012]: time="2023-09-19T16:55:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9baecebc6dd1099654601979e6cbbfaa20f3e668e516fe2af70cd5d43fe75ab4/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 19 16:55:40 multinode-415589 cri-dockerd[1012]: time="2023-09-19T16:55:40Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Sep 19 16:55:40 multinode-415589 dockerd[1130]: time="2023-09-19T16:55:40.114754708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 19 16:55:40 multinode-415589 dockerd[1130]: time="2023-09-19T16:55:40.114972339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 16:55:40 multinode-415589 dockerd[1130]: time="2023-09-19T16:55:40.114997712Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 19 16:55:40 multinode-415589 dockerd[1130]: time="2023-09-19T16:55:40.115089196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d7a3e4d244557       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   9baecebc6dd10       busybox-5bc68d56bd-rkqh6
	b87361f4b0e67       ead0a4a53df89                                                                                         2 minutes ago        Running             coredns                   0                   4dc73b0c19acc       coredns-5dd5756b68-ctsv5
	330b2b5636032       6e38f40d628db                                                                                         2 minutes ago        Running             storage-provisioner       0                   a48a984b72660       storage-provisioner
	8fcfd36bfc2b0       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              2 minutes ago        Running             kindnet-cni               0                   84dcc3c9931a1       kindnet-w9q5z
	7dafb88f7c1fc       c120fed2beb84                                                                                         2 minutes ago        Running             kube-proxy                0                   fb0b0cb556e8e       kube-proxy-r6jtp
	1979af9a7d9b7       73deb9a3f7025                                                                                         2 minutes ago        Running             etcd                      0                   2a9a021fe9dc3       etcd-multinode-415589
	bfef71d52559a       7a5d9d67a13f6                                                                                         2 minutes ago        Running             kube-scheduler            0                   6a7de8b20db05       kube-scheduler-multinode-415589
	ff647b080408d       cdcab12b2dd16                                                                                         2 minutes ago        Running             kube-apiserver            0                   24b04414fbb49       kube-apiserver-multinode-415589
	54fbef2163632       55f13c92defb1                                                                                         2 minutes ago        Running             kube-controller-manager   0                   06ff8d69d511e       kube-controller-manager-multinode-415589
	
	* 
	* ==> coredns [b87361f4b0e6] <==
	* [INFO] 10.244.1.2:50553 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000202366s
	[INFO] 10.244.0.3:47362 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099929s
	[INFO] 10.244.0.3:54185 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001856783s
	[INFO] 10.244.0.3:55758 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164664s
	[INFO] 10.244.0.3:35778 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000065438s
	[INFO] 10.244.0.3:51650 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00129839s
	[INFO] 10.244.0.3:33357 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000058673s
	[INFO] 10.244.0.3:50578 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092136s
	[INFO] 10.244.0.3:57002 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074049s
	[INFO] 10.244.1.2:47019 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000181883s
	[INFO] 10.244.1.2:34149 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167069s
	[INFO] 10.244.1.2:47304 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102053s
	[INFO] 10.244.1.2:35347 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102856s
	[INFO] 10.244.0.3:39095 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000086025s
	[INFO] 10.244.0.3:49675 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00006068s
	[INFO] 10.244.0.3:44686 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000037939s
	[INFO] 10.244.0.3:53348 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000041986s
	[INFO] 10.244.1.2:46588 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165902s
	[INFO] 10.244.1.2:35220 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000227022s
	[INFO] 10.244.1.2:37672 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000196002s
	[INFO] 10.244.1.2:52969 - 5 "PTR IN 1.50.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000204714s
	[INFO] 10.244.0.3:53988 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000077772s
	[INFO] 10.244.0.3:40409 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000041061s
	[INFO] 10.244.0.3:42980 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000039389s
	[INFO] 10.244.0.3:37395 - 5 "PTR IN 1.50.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000038778s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-415589
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-415589
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986
	                    minikube.k8s.io/name=multinode-415589
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_19T16_54_12_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Sep 2023 16:54:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-415589
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Sep 2023 16:56:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Sep 2023 16:55:43 +0000   Tue, 19 Sep 2023 16:54:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Sep 2023 16:55:43 +0000   Tue, 19 Sep 2023 16:54:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Sep 2023 16:55:43 +0000   Tue, 19 Sep 2023 16:54:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Sep 2023 16:55:43 +0000   Tue, 19 Sep 2023 16:54:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.11
	  Hostname:    multinode-415589
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 4445f8c83eec4871b37dec36f475360f
	  System UUID:                4445f8c8-3eec-4871-b37d-ec36f475360f
	  Boot ID:                    b0f45def-c91d-4dd8-b760-0f78f7732ba8
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-rkqh6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 coredns-5dd5756b68-ctsv5                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m36s
	  kube-system                 etcd-multinode-415589                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m48s
	  kube-system                 kindnet-w9q5z                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m36s
	  kube-system                 kube-apiserver-multinode-415589             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 kube-controller-manager-multinode-415589   200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m50s
	  kube-system                 kube-proxy-r6jtp                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 kube-scheduler-multinode-415589             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m35s  kube-proxy       
	  Normal  Starting                 2m48s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m48s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m48s  kubelet          Node multinode-415589 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m48s  kubelet          Node multinode-415589 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m48s  kubelet          Node multinode-415589 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m37s  node-controller  Node multinode-415589 event: Registered Node multinode-415589 in Controller
	  Normal  NodeReady                2m23s  kubelet          Node multinode-415589 status is now: NodeReady
	
	
	Name:               multinode-415589-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-415589-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Sep 2023 16:55:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-415589-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Sep 2023 16:56:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Sep 2023 16:55:53 +0000   Tue, 19 Sep 2023 16:55:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Sep 2023 16:55:53 +0000   Tue, 19 Sep 2023 16:55:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Sep 2023 16:55:53 +0000   Tue, 19 Sep 2023 16:55:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Sep 2023 16:55:53 +0000   Tue, 19 Sep 2023 16:55:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.170
	  Hostname:    multinode-415589-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 ccab75db92294aacb66f42c440b2dfdf
	  System UUID:                ccab75db-9229-4aac-b66f-42c440b2dfdf
	  Boot ID:                    d70ba22e-70e0-4ddc-8b47-7016087dc451
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-9qfss    0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kindnet-64m2w               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      96s
	  kube-system                 kube-proxy-hxjql            0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 89s                kube-proxy       
	  Normal  NodeHasSufficientMemory  96s (x5 over 98s)  kubelet          Node multinode-415589-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s (x5 over 98s)  kubelet          Node multinode-415589-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s (x5 over 98s)  kubelet          Node multinode-415589-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           92s                node-controller  Node multinode-415589-m02 event: Registered Node multinode-415589-m02 in Controller
	  Normal  NodeReady                84s                kubelet          Node multinode-415589-m02 status is now: NodeReady
	
	
	Name:               multinode-415589-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-415589-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Sep 2023 16:56:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-415589-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Sep 2023 16:56:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Sep 2023 16:56:25 +0000   Tue, 19 Sep 2023 16:56:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Sep 2023 16:56:25 +0000   Tue, 19 Sep 2023 16:56:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Sep 2023 16:56:25 +0000   Tue, 19 Sep 2023 16:56:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Sep 2023 16:56:25 +0000   Tue, 19 Sep 2023 16:56:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.209
	  Hostname:    multinode-415589-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a4230c5cb77943d2a1409cdd61aeb739
	  System UUID:                a4230c5c-b779-43d2-a140-9cdd61aeb739
	  Boot ID:                    7f2e2251-cd63-4b54-b7d7-e5c85ad80c9a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pmpvh       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      46s
	  kube-system                 kube-proxy-p8gzq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 40s                kube-proxy       
	  Normal  Starting                 47s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  47s (x2 over 47s)  kubelet          Node multinode-415589-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s (x2 over 47s)  kubelet          Node multinode-415589-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s (x2 over 47s)  kubelet          Node multinode-415589-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  46s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           42s                node-controller  Node multinode-415589-m03 event: Registered Node multinode-415589-m03 in Controller
	  Normal  NodeReady                34s                kubelet          Node multinode-415589-m03 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.072804] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.333698] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.377013] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.141407] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.049597] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.970257] systemd-fstab-generator[547]: Ignoring "noauto" for root device
	[  +0.093337] systemd-fstab-generator[558]: Ignoring "noauto" for root device
	[  +1.132994] systemd-fstab-generator[736]: Ignoring "noauto" for root device
	[  +0.274793] systemd-fstab-generator[774]: Ignoring "noauto" for root device
	[  +0.107916] systemd-fstab-generator[785]: Ignoring "noauto" for root device
	[  +0.121952] systemd-fstab-generator[798]: Ignoring "noauto" for root device
	[  +1.506445] systemd-fstab-generator[957]: Ignoring "noauto" for root device
	[  +0.106976] systemd-fstab-generator[968]: Ignoring "noauto" for root device
	[  +0.106530] systemd-fstab-generator[979]: Ignoring "noauto" for root device
	[  +0.125600] systemd-fstab-generator[990]: Ignoring "noauto" for root device
	[  +0.130201] systemd-fstab-generator[1004]: Ignoring "noauto" for root device
	[  +4.466467] systemd-fstab-generator[1115]: Ignoring "noauto" for root device
	[  +3.001624] kauditd_printk_skb: 53 callbacks suppressed
	[Sep19 16:54] systemd-fstab-generator[1497]: Ignoring "noauto" for root device
	[  +8.756901] systemd-fstab-generator[2437]: Ignoring "noauto" for root device
	[ +13.802980] kauditd_printk_skb: 39 callbacks suppressed
	[  +7.151151] kauditd_printk_skb: 14 callbacks suppressed
	
	* 
	* ==> etcd [1979af9a7d9b] <==
	* {"level":"info","ts":"2023-09-19T16:54:05.260334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e4e4533a349b670b switched to configuration voters=(16493399244793407243)"}
	{"level":"info","ts":"2023-09-19T16:54:05.260944Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c22b887c03da3da3","local-member-id":"e4e4533a349b670b","added-peer-id":"e4e4533a349b670b","added-peer-peer-urls":["https://192.168.50.11:2380"]}
	{"level":"info","ts":"2023-09-19T16:54:06.0075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e4e4533a349b670b is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-19T16:54:06.007832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e4e4533a349b670b became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-19T16:54:06.007995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e4e4533a349b670b received MsgPreVoteResp from e4e4533a349b670b at term 1"}
	{"level":"info","ts":"2023-09-19T16:54:06.008189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e4e4533a349b670b became candidate at term 2"}
	{"level":"info","ts":"2023-09-19T16:54:06.008399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e4e4533a349b670b received MsgVoteResp from e4e4533a349b670b at term 2"}
	{"level":"info","ts":"2023-09-19T16:54:06.008591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e4e4533a349b670b became leader at term 2"}
	{"level":"info","ts":"2023-09-19T16:54:06.008756Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e4e4533a349b670b elected leader e4e4533a349b670b at term 2"}
	{"level":"info","ts":"2023-09-19T16:54:06.010749Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"e4e4533a349b670b","local-member-attributes":"{Name:multinode-415589 ClientURLs:[https://192.168.50.11:2379]}","request-path":"/0/members/e4e4533a349b670b/attributes","cluster-id":"c22b887c03da3da3","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-19T16:54:06.010912Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T16:54:06.011337Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T16:54:06.0122Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.11:2379"}
	{"level":"info","ts":"2023-09-19T16:54:06.012364Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T16:54:06.01275Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-19T16:54:06.027557Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c22b887c03da3da3","local-member-id":"e4e4533a349b670b","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T16:54:06.027723Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T16:54:06.027748Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T16:54:06.012788Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-19T16:54:06.027764Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-19T16:56:13.029749Z","caller":"traceutil/trace.go:171","msg":"trace[469114868] transaction","detail":"{read_only:false; response_revision:585; number_of_response:1; }","duration":"217.198508ms","start":"2023-09-19T16:56:12.812527Z","end":"2023-09-19T16:56:13.029726Z","steps":["trace[469114868] 'process raft request'  (duration: 173.968595ms)","trace[469114868] 'compare'  (duration: 42.963963ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-19T16:56:13.029663Z","caller":"traceutil/trace.go:171","msg":"trace[1378110883] linearizableReadLoop","detail":"{readStateIndex:624; appliedIndex:622; }","duration":"164.828804ms","start":"2023-09-19T16:56:12.86479Z","end":"2023-09-19T16:56:13.029619Z","steps":["trace[1378110883] 'read index received'  (duration: 121.714313ms)","trace[1378110883] 'applied index is now lower than readState.Index'  (duration: 43.113941ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-19T16:56:13.030308Z","caller":"traceutil/trace.go:171","msg":"trace[564010888] transaction","detail":"{read_only:false; response_revision:586; number_of_response:1; }","duration":"165.315464ms","start":"2023-09-19T16:56:12.864716Z","end":"2023-09-19T16:56:13.030032Z","steps":["trace[564010888] 'process raft request'  (duration: 164.861317ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-19T16:56:13.030888Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.027914ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-415589-m03\" ","response":"range_response_count:1 size:1878"}
	{"level":"info","ts":"2023-09-19T16:56:13.030986Z","caller":"traceutil/trace.go:171","msg":"trace[188424040] range","detail":"{range_begin:/registry/minions/multinode-415589-m03; range_end:; response_count:1; response_revision:586; }","duration":"166.195666ms","start":"2023-09-19T16:56:12.864778Z","end":"2023-09-19T16:56:13.030973Z","steps":["trace[188424040] 'agreement among raft nodes before linearized reading'  (duration: 165.034798ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  16:56:59 up 3 min,  0 users,  load average: 0.33, 0.28, 0.11
	Linux multinode-415589 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [8fcfd36bfc2b] <==
	* I0919 16:56:21.848932       1 main.go:223] Handling node with IPs: map[192.168.50.11:{}]
	I0919 16:56:21.849121       1 main.go:227] handling current node
	I0919 16:56:21.849148       1 main.go:223] Handling node with IPs: map[192.168.50.170:{}]
	I0919 16:56:21.849438       1 main.go:250] Node multinode-415589-m02 has CIDR [10.244.1.0/24] 
	I0919 16:56:21.849781       1 main.go:223] Handling node with IPs: map[192.168.50.209:{}]
	I0919 16:56:21.849883       1 main.go:250] Node multinode-415589-m03 has CIDR [10.244.2.0/24] 
	I0919 16:56:21.850336       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.50.209 Flags: [] Table: 0} 
	I0919 16:56:31.857112       1 main.go:223] Handling node with IPs: map[192.168.50.11:{}]
	I0919 16:56:31.857149       1 main.go:227] handling current node
	I0919 16:56:31.857171       1 main.go:223] Handling node with IPs: map[192.168.50.170:{}]
	I0919 16:56:31.857178       1 main.go:250] Node multinode-415589-m02 has CIDR [10.244.1.0/24] 
	I0919 16:56:31.857480       1 main.go:223] Handling node with IPs: map[192.168.50.209:{}]
	I0919 16:56:31.857496       1 main.go:250] Node multinode-415589-m03 has CIDR [10.244.2.0/24] 
	I0919 16:56:41.872428       1 main.go:223] Handling node with IPs: map[192.168.50.11:{}]
	I0919 16:56:41.872568       1 main.go:227] handling current node
	I0919 16:56:41.872597       1 main.go:223] Handling node with IPs: map[192.168.50.170:{}]
	I0919 16:56:41.872766       1 main.go:250] Node multinode-415589-m02 has CIDR [10.244.1.0/24] 
	I0919 16:56:41.872995       1 main.go:223] Handling node with IPs: map[192.168.50.209:{}]
	I0919 16:56:41.873180       1 main.go:250] Node multinode-415589-m03 has CIDR [10.244.2.0/24] 
	I0919 16:56:51.880337       1 main.go:223] Handling node with IPs: map[192.168.50.11:{}]
	I0919 16:56:51.880394       1 main.go:227] handling current node
	I0919 16:56:51.880421       1 main.go:223] Handling node with IPs: map[192.168.50.170:{}]
	I0919 16:56:51.880429       1 main.go:250] Node multinode-415589-m02 has CIDR [10.244.1.0/24] 
	I0919 16:56:51.880598       1 main.go:223] Handling node with IPs: map[192.168.50.209:{}]
	I0919 16:56:51.880643       1 main.go:250] Node multinode-415589-m03 has CIDR [10.244.2.0/24] 
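
kindnet's loop above is deliberately simple: roughly every ten seconds it lists the nodes, treats the local one specially, and ensures each remote node's PodCIDR is routed via that node's IP (the single "Adding route" line for m03). That one step is equivalent to programming a kernel route; a minimal sketch using the vishvananda/netlink package, assuming root inside the VM (kindnet's real routes.go also handles deletions and re-syncs):

    package main

    import (
        "log"
        "net"

        "github.com/vishvananda/netlink"
    )

    func main() {
        // Route pod traffic for the remote node's CIDR via that node's IP,
        // mirroring "Adding route {... Dst: 10.244.2.0/24 ... Gw: 192.168.50.209}".
        _, dst, err := net.ParseCIDR("10.244.2.0/24")
        if err != nil {
            log.Fatal(err)
        }
        route := &netlink.Route{Dst: dst, Gw: net.ParseIP("192.168.50.209")}
        // RouteReplace is idempotent: it adds the route or replaces an existing one.
        if err := netlink.RouteReplace(route); err != nil {
            log.Fatal(err)
        }
    }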
	
	* 
	* ==> kube-apiserver [ff647b080408] <==
	* I0919 16:54:07.640057       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0919 16:54:07.643846       1 shared_informer.go:318] Caches are synced for configmaps
	I0919 16:54:07.644018       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0919 16:54:07.644676       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0919 16:54:07.644714       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0919 16:54:07.813553       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 16:54:08.435467       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0919 16:54:08.442968       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0919 16:54:08.443011       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0919 16:54:09.105790       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0919 16:54:09.181332       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0919 16:54:09.264406       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0919 16:54:09.272064       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.50.11]
	I0919 16:54:09.273055       1 controller.go:624] quota admission added evaluator for: endpoints
	I0919 16:54:09.277463       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 16:54:09.483619       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0919 16:54:11.054610       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0919 16:54:11.073713       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0919 16:54:11.085035       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0919 16:54:23.200424       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0919 16:54:23.247725       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E0919 16:55:22.387793       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0919 16:55:22.387864       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0919 16:55:22.389702       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0919 16:55:22.390974       1 timeout.go:142] post-timeout activity - time-elapsed: 3.419126ms, GET "/api/v1/services" result: <nil>
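
The four errors at 16:55:22 are one incident viewed from several layers: a GET /api/v1/services ran past the apiserver's request timeout, the timeout filter answered the client first, and the handler's own late write then failed with "http: Handler timeout" (hence the "post-timeout activity" line). The same failure mode can be reproduced with net/http's TimeoutHandler; a self-contained sketch (the 10ms budget and 50ms sleep are illustrative; the stock TimeoutHandler answers 503 where the apiserver's filter answers 504, but the handler-side sentinel error is the same):

    package main

    import (
        "errors"
        "fmt"
        "net/http"
        "net/http/httptest"
        "time"
    )

    func main() {
        slow := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            time.Sleep(50 * time.Millisecond) // outlives the 10ms budget below
            _, err := w.Write([]byte("too late"))
            // After the deadline, writes fail with http.ErrHandlerTimeout,
            // the same "Handler timeout" the apiserver logged.
            fmt.Println("post-timeout write failed:", errors.Is(err, http.ErrHandlerTimeout))
        })
        srv := httptest.NewServer(http.TimeoutHandler(slow, 10*time.Millisecond, "request timed out"))
        defer srv.Close()

        resp, err := http.Get(srv.URL)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println("client saw:", resp.Status) // 503 Service Unavailable from TimeoutHandler
        time.Sleep(100 * time.Millisecond)      // let the slow handler finish printing
    }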
	
	* 
	* ==> kube-controller-manager [54fbef216363] <==
	* I0919 16:55:23.046955       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hxjql"
	I0919 16:55:23.055072       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-415589-m02" podCIDRs=["10.244.1.0/24"]
	I0919 16:55:23.055128       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-64m2w"
	I0919 16:55:27.362524       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-415589-m02"
	I0919 16:55:27.363054       1 event.go:307] "Event occurred" object="multinode-415589-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-415589-m02 event: Registered Node multinode-415589-m02 in Controller"
	I0919 16:55:35.615073       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-415589-m02"
	I0919 16:55:37.895652       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0919 16:55:37.920772       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-9qfss"
	I0919 16:55:37.949699       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-rkqh6"
	I0919 16:55:37.962082       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="65.666669ms"
	I0919 16:55:37.981762       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="18.976967ms"
	I0919 16:55:37.981892       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="54.791µs"
	I0919 16:55:38.001890       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="34.922µs"
	I0919 16:55:40.158120       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.838252ms"
	I0919 16:55:40.159282       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="39.998µs"
	I0919 16:55:40.901205       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="17.861459ms"
	I0919 16:55:40.902558       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="32.053µs"
	I0919 16:56:13.034837       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-415589-m02"
	I0919 16:56:13.036161       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-415589-m03\" does not exist"
	I0919 16:56:13.056038       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-pmpvh"
	I0919 16:56:13.056495       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-p8gzq"
	I0919 16:56:13.062541       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-415589-m03" podCIDRs=["10.244.2.0/24"]
	I0919 16:56:17.383762       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-415589-m03"
	I0919 16:56:17.383780       1 event.go:307] "Event occurred" object="multinode-415589-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-415589-m03 event: Registered Node multinode-415589-m03 in Controller"
	I0919 16:56:25.234121       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-415589-m02"
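
The range_allocator lines show each joining node receiving the next free /24 out of the cluster CIDR: m02 got 10.244.1.0/24 and m03 got 10.244.2.0/24, matching the PodCIDR in the node description earlier. For a /16 cluster CIDR carved into /24s, the arithmetic reduces to setting the third octet to the subnet index; a toy sketch of that calculation (the real allocator in range_allocator.go tracks a bitmap of used ranges and supports arbitrary mask sizes):

    package main

    import (
        "fmt"
        "net"
    )

    // nthPodCIDR returns the i-th /24 inside a /16 cluster CIDR, e.g. index 2
    // of 10.244.0.0/16 is 10.244.2.0/24, the range m03 received above.
    func nthPodCIDR(clusterCIDR string, i int) (*net.IPNet, error) {
        _, ipnet, err := net.ParseCIDR(clusterCIDR)
        if err != nil {
            return nil, err
        }
        ip := ipnet.IP.To4()
        sub := net.IPv4(ip[0], ip[1], byte(i), 0)
        return &net.IPNet{IP: sub, Mask: net.CIDRMask(24, 32)}, nil
    }

    func main() {
        for i := 0; i < 3; i++ {
            c, _ := nthPodCIDR("10.244.0.0/16", i)
            fmt.Println(c) // 10.244.0.0/24, 10.244.1.0/24, 10.244.2.0/24
        }
    }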
	
	* 
	* ==> kube-proxy [7dafb88f7c1f] <==
	* I0919 16:54:24.550977       1 server_others.go:69] "Using iptables proxy"
	I0919 16:54:24.571064       1 node.go:141] Successfully retrieved node IP: 192.168.50.11
	I0919 16:54:24.657576       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0919 16:54:24.657633       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 16:54:24.660779       1 server_others.go:152] "Using iptables Proxier"
	I0919 16:54:24.661577       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0919 16:54:24.661825       1 server.go:846] "Version info" version="v1.28.2"
	I0919 16:54:24.661836       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 16:54:24.663079       1 config.go:188] "Starting service config controller"
	I0919 16:54:24.663770       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0919 16:54:24.663797       1 config.go:315] "Starting node config controller"
	I0919 16:54:24.663803       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0919 16:54:24.664576       1 config.go:97] "Starting endpoint slice config controller"
	I0919 16:54:24.664585       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0919 16:54:24.764136       1 shared_informer.go:318] Caches are synced for node config
	I0919 16:54:24.764162       1 shared_informer.go:318] Caches are synced for service config
	I0919 16:54:24.765331       1 shared_informer.go:318] Caches are synced for endpoint slice config
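
kube-proxy runs here in iptables mode, IPv4 single-stack (the "No iptables support for family" IPv6 line has the same root cause as the kubelet ip6tables errors further down). The route_localnet note means it set a sysctl so NodePort services also answer on 127.0.0.1; on Linux that amounts to a one-line write to procfs. A sketch, assuming root inside the VM:

    package main

    import (
        "log"
        "os"
    )

    func main() {
        // Equivalent of `sysctl -w net.ipv4.conf.all.route_localnet=1`,
        // which is what kube-proxy sets so node ports work on localhost.
        const key = "/proc/sys/net/ipv4/conf/all/route_localnet"
        if err := os.WriteFile(key, []byte("1"), 0o644); err != nil {
            log.Fatal(err)
        }
    }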
	
	* 
	* ==> kube-scheduler [bfef71d52559] <==
	* W0919 16:54:08.494968       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0919 16:54:08.495024       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0919 16:54:08.543795       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 16:54:08.543850       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0919 16:54:08.621675       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 16:54:08.621742       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0919 16:54:08.721489       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 16:54:08.721542       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0919 16:54:08.741802       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 16:54:08.741860       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0919 16:54:08.775933       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 16:54:08.775999       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0919 16:54:08.782134       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0919 16:54:08.782192       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0919 16:54:08.791610       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0919 16:54:08.791669       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0919 16:54:08.794144       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 16:54:08.794210       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0919 16:54:08.806765       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0919 16:54:08.806826       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0919 16:54:08.866895       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 16:54:08.866954       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0919 16:54:09.093117       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0919 16:54:09.093194       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0919 16:54:11.868094       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-09-19 16:53:36 UTC, ends at Tue 2023-09-19 16:57:00 UTC. --
	Sep 19 16:54:27 multinode-415589 kubelet[2458]: I0919 16:54:27.401991    2458 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-r6jtp" podStartSLOduration=4.401770934 podCreationTimestamp="2023-09-19 16:54:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-19 16:54:27.400598693 +0000 UTC m=+16.383746492" watchObservedRunningTime="2023-09-19 16:54:27.401770934 +0000 UTC m=+16.384918730"
	Sep 19 16:54:36 multinode-415589 kubelet[2458]: I0919 16:54:36.016547    2458 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Sep 19 16:54:36 multinode-415589 kubelet[2458]: I0919 16:54:36.051143    2458 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-w9q5z" podStartSLOduration=9.602827124000001 podCreationTimestamp="2023-09-19 16:54:23 +0000 UTC" firstStartedPulling="2023-09-19 16:54:27.366686702 +0000 UTC m=+16.349834479" lastFinishedPulling="2023-09-19 16:54:30.814966539 +0000 UTC m=+19.798114315" observedRunningTime="2023-09-19 16:54:31.52920746 +0000 UTC m=+20.512355254" watchObservedRunningTime="2023-09-19 16:54:36.05110696 +0000 UTC m=+25.034254801"
	Sep 19 16:54:36 multinode-415589 kubelet[2458]: I0919 16:54:36.051589    2458 topology_manager.go:215] "Topology Admit Handler" podUID="61db80e1-b248-49b3-aab0-4b70b4b47c51" podNamespace="kube-system" podName="storage-provisioner"
	Sep 19 16:54:36 multinode-415589 kubelet[2458]: I0919 16:54:36.056853    2458 topology_manager.go:215] "Topology Admit Handler" podUID="d4fcd880-e2ad-4d44-a070-e2af114e5e38" podNamespace="kube-system" podName="coredns-5dd5756b68-ctsv5"
	Sep 19 16:54:36 multinode-415589 kubelet[2458]: I0919 16:54:36.170661    2458 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/61db80e1-b248-49b3-aab0-4b70b4b47c51-tmp\") pod \"storage-provisioner\" (UID: \"61db80e1-b248-49b3-aab0-4b70b4b47c51\") " pod="kube-system/storage-provisioner"
	Sep 19 16:54:36 multinode-415589 kubelet[2458]: I0919 16:54:36.170862    2458 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt2w7\" (UniqueName: \"kubernetes.io/projected/61db80e1-b248-49b3-aab0-4b70b4b47c51-kube-api-access-zt2w7\") pod \"storage-provisioner\" (UID: \"61db80e1-b248-49b3-aab0-4b70b4b47c51\") " pod="kube-system/storage-provisioner"
	Sep 19 16:54:36 multinode-415589 kubelet[2458]: I0919 16:54:36.171051    2458 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4fcd880-e2ad-4d44-a070-e2af114e5e38-config-volume\") pod \"coredns-5dd5756b68-ctsv5\" (UID: \"d4fcd880-e2ad-4d44-a070-e2af114e5e38\") " pod="kube-system/coredns-5dd5756b68-ctsv5"
	Sep 19 16:54:36 multinode-415589 kubelet[2458]: I0919 16:54:36.171122    2458 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2mbn\" (UniqueName: \"kubernetes.io/projected/d4fcd880-e2ad-4d44-a070-e2af114e5e38-kube-api-access-f2mbn\") pod \"coredns-5dd5756b68-ctsv5\" (UID: \"d4fcd880-e2ad-4d44-a070-e2af114e5e38\") " pod="kube-system/coredns-5dd5756b68-ctsv5"
	Sep 19 16:54:37 multinode-415589 kubelet[2458]: I0919 16:54:37.070979    2458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dc73b0c19acc1823f938bdac00e9aef48901d30a9938252e5bfa445f3b60ab4"
	Sep 19 16:54:37 multinode-415589 kubelet[2458]: I0919 16:54:37.251320    2458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a48a984b726602555fe6103a682cf7c01cbdc4cfc063e347b37e7b664cd0efd9"
	Sep 19 16:54:38 multinode-415589 kubelet[2458]: I0919 16:54:38.307790    2458 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.307750753 podCreationTimestamp="2023-09-19 16:54:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-19 16:54:38.28465408 +0000 UTC m=+27.267801876" watchObservedRunningTime="2023-09-19 16:54:38.307750753 +0000 UTC m=+27.290898532"
	Sep 19 16:54:38 multinode-415589 kubelet[2458]: I0919 16:54:38.308501    2458 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-ctsv5" podStartSLOduration=15.308470382 podCreationTimestamp="2023-09-19 16:54:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-19 16:54:38.307594127 +0000 UTC m=+27.290741924" watchObservedRunningTime="2023-09-19 16:54:38.308470382 +0000 UTC m=+27.291618178"
	Sep 19 16:55:11 multinode-415589 kubelet[2458]: E0919 16:55:11.576449    2458 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 19 16:55:11 multinode-415589 kubelet[2458]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 16:55:11 multinode-415589 kubelet[2458]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 16:55:11 multinode-415589 kubelet[2458]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 19 16:55:37 multinode-415589 kubelet[2458]: I0919 16:55:37.962060    2458 topology_manager.go:215] "Topology Admit Handler" podUID="f7b2cebb-4d8b-43fd-9f27-3f5f0b434f77" podNamespace="default" podName="busybox-5bc68d56bd-rkqh6"
	Sep 19 16:55:38 multinode-415589 kubelet[2458]: I0919 16:55:38.068470    2458 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x75b\" (UniqueName: \"kubernetes.io/projected/f7b2cebb-4d8b-43fd-9f27-3f5f0b434f77-kube-api-access-6x75b\") pod \"busybox-5bc68d56bd-rkqh6\" (UID: \"f7b2cebb-4d8b-43fd-9f27-3f5f0b434f77\") " pod="default/busybox-5bc68d56bd-rkqh6"
	Sep 19 16:55:38 multinode-415589 kubelet[2458]: I0919 16:55:38.836263    2458 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9baecebc6dd1099654601979e6cbbfaa20f3e668e516fe2af70cd5d43fe75ab4"
	Sep 19 16:55:40 multinode-415589 kubelet[2458]: I0919 16:55:40.884569    2458 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-rkqh6" podStartSLOduration=2.751872918 podCreationTimestamp="2023-09-19 16:55:37 +0000 UTC" firstStartedPulling="2023-09-19 16:55:38.877383877 +0000 UTC m=+87.860531653" lastFinishedPulling="2023-09-19 16:55:40.009995277 +0000 UTC m=+88.993143054" observedRunningTime="2023-09-19 16:55:40.88396924 +0000 UTC m=+89.867117037" watchObservedRunningTime="2023-09-19 16:55:40.884484319 +0000 UTC m=+89.867632116"
	Sep 19 16:56:11 multinode-415589 kubelet[2458]: E0919 16:56:11.580544    2458 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 19 16:56:11 multinode-415589 kubelet[2458]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 19 16:56:11 multinode-415589 kubelet[2458]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 19 16:56:11 multinode-415589 kubelet[2458]:  > table="nat" chain="KUBE-KUBELET-CANARY"
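
The recurring "Could not set up iptables canary" error is benign on this guest image: kubelet periodically creates a KUBE-KUBELET-CANARY chain in each table to detect external iptables flushes, and this Buildroot kernel has no ip6tables nat table, so the IPv6 probe exits with status 3 ("Table does not exist"). The exit status can be inspected with os/exec; a sketch mirroring the probe (requires ip6tables on PATH and root):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Mirror kubelet's canary probe: create a chain in ip6tables' nat table.
        cmd := exec.Command("ip6tables", "-t", "nat", "-N", "KUBE-KUBELET-CANARY")
        err := cmd.Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            // Exit status 3 here means the nat table itself is unavailable
            // (missing kernel module), which is what the kubelet log shows.
            fmt.Println("ip6tables exit status:", ee.ExitCode())
        } else if err != nil {
            fmt.Println("could not run ip6tables:", err)
        }
    }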
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-415589 -n multinode-415589
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-415589 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StartAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StartAfterStop (21.67s)

                                                
                                    
x
+
TestScheduledStopUnix (53.28s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-297595 --memory=2048 --driver=kvm2 
E0919 17:08:17.311840   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/functional-973448/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-297595 --memory=2048 --driver=kvm2 : (50.679915646s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-297595 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-297595 -n scheduled-stop-297595
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-297595 --schedule 15s
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:98: process 92056 running but should have been killed on reschedule of stop
panic.go:523: *** TestScheduledStopUnix FAILED at 2023-09-19 17:08:30.618498233 +0000 UTC m=+2042.956697291
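
The failed assertion at scheduled_stop_test.go:98 encodes the contract under test: each new "minikube stop --schedule" must kill the daemonized stop process left by the previous invocation, and here pid 92056 survived the reschedule. The liveness probe behind an assertion like that is conventionally signal 0 on Unix; a minimal sketch (the pid is taken from the log and purely illustrative):

    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    // alive reports whether a process with the given pid currently exists.
    // On Unix, os.FindProcess always succeeds, so the real check is sending
    // signal 0, which delivers nothing but fails if the process is gone.
    func alive(pid int) bool {
        p, err := os.FindProcess(pid)
        if err != nil {
            return false
        }
        return p.Signal(syscall.Signal(0)) == nil
    }

    func main() {
        fmt.Println(alive(92056)) // the test expects false after a reschedule
    }
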
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-297595 -n scheduled-stop-297595
helpers_test.go:244: <<< TestScheduledStopUnix FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestScheduledStopUnix]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p scheduled-stop-297595 logs -n 25
helpers_test.go:252: TestScheduledStopUnix logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| stop    | -p multinode-415589            | multinode-415589      | jenkins | v1.31.2 | 19 Sep 23 16:57 UTC | 19 Sep 23 16:58 UTC |
	| start   | -p multinode-415589            | multinode-415589      | jenkins | v1.31.2 | 19 Sep 23 16:58 UTC | 19 Sep 23 17:01 UTC |
	|         | --wait=true -v=8               |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	| node    | list -p multinode-415589       | multinode-415589      | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC |                     |
	| node    | multinode-415589 node delete   | multinode-415589      | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:01 UTC |
	|         | m03                            |                       |         |         |                     |                     |
	| stop    | multinode-415589 stop          | multinode-415589      | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:01 UTC |
	| start   | -p multinode-415589            | multinode-415589      | jenkins | v1.31.2 | 19 Sep 23 17:01 UTC | 19 Sep 23 17:03 UTC |
	|         | --wait=true -v=8               |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	|         | --driver=kvm2                  |                       |         |         |                     |                     |
	| node    | list -p multinode-415589       | multinode-415589      | jenkins | v1.31.2 | 19 Sep 23 17:03 UTC |                     |
	| start   | -p multinode-415589-m02        | multinode-415589-m02  | jenkins | v1.31.2 | 19 Sep 23 17:03 UTC |                     |
	|         | --driver=kvm2                  |                       |         |         |                     |                     |
	| start   | -p multinode-415589-m03        | multinode-415589-m03  | jenkins | v1.31.2 | 19 Sep 23 17:03 UTC | 19 Sep 23 17:04 UTC |
	|         | --driver=kvm2                  |                       |         |         |                     |                     |
	| node    | add -p multinode-415589        | multinode-415589      | jenkins | v1.31.2 | 19 Sep 23 17:04 UTC |                     |
	| delete  | -p multinode-415589-m03        | multinode-415589-m03  | jenkins | v1.31.2 | 19 Sep 23 17:04 UTC | 19 Sep 23 17:04 UTC |
	| delete  | -p multinode-415589            | multinode-415589      | jenkins | v1.31.2 | 19 Sep 23 17:04 UTC | 19 Sep 23 17:04 UTC |
	| start   | -p test-preload-580479         | test-preload-580479   | jenkins | v1.31.2 | 19 Sep 23 17:04 UTC | 19 Sep 23 17:06 UTC |
	|         | --memory=2200                  |                       |         |         |                     |                     |
	|         | --alsologtostderr --wait=true  |                       |         |         |                     |                     |
	|         | --preload=false --driver=kvm2  |                       |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4   |                       |         |         |                     |                     |
	| image   | test-preload-580479 image pull | test-preload-580479   | jenkins | v1.31.2 | 19 Sep 23 17:06 UTC | 19 Sep 23 17:06 UTC |
	|         | gcr.io/k8s-minikube/busybox    |                       |         |         |                     |                     |
	| stop    | -p test-preload-580479         | test-preload-580479   | jenkins | v1.31.2 | 19 Sep 23 17:06 UTC | 19 Sep 23 17:06 UTC |
	| start   | -p test-preload-580479         | test-preload-580479   | jenkins | v1.31.2 | 19 Sep 23 17:06 UTC | 19 Sep 23 17:07 UTC |
	|         | --memory=2200                  |                       |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                       |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                       |         |         |                     |                     |
	| image   | test-preload-580479 image list | test-preload-580479   | jenkins | v1.31.2 | 19 Sep 23 17:07 UTC | 19 Sep 23 17:07 UTC |
	| delete  | -p test-preload-580479         | test-preload-580479   | jenkins | v1.31.2 | 19 Sep 23 17:07 UTC | 19 Sep 23 17:07 UTC |
	| start   | -p scheduled-stop-297595       | scheduled-stop-297595 | jenkins | v1.31.2 | 19 Sep 23 17:07 UTC | 19 Sep 23 17:08 UTC |
	|         | --memory=2048 --driver=kvm2    |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-297595       | scheduled-stop-297595 | jenkins | v1.31.2 | 19 Sep 23 17:08 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-297595       | scheduled-stop-297595 | jenkins | v1.31.2 | 19 Sep 23 17:08 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-297595       | scheduled-stop-297595 | jenkins | v1.31.2 | 19 Sep 23 17:08 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-297595       | scheduled-stop-297595 | jenkins | v1.31.2 | 19 Sep 23 17:08 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-297595       | scheduled-stop-297595 | jenkins | v1.31.2 | 19 Sep 23 17:08 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-297595       | scheduled-stop-297595 | jenkins | v1.31.2 | 19 Sep 23 17:08 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/19 17:07:39
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 17:07:39.642153   91689 out.go:296] Setting OutFile to fd 1 ...
	I0919 17:07:39.642411   91689 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:07:39.642415   91689 out.go:309] Setting ErrFile to fd 2...
	I0919 17:07:39.642418   91689 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:07:39.642565   91689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-65689/.minikube/bin
	I0919 17:07:39.643097   91689 out.go:303] Setting JSON to false
	I0919 17:07:39.643914   91689 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":6373,"bootTime":1695136887,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 17:07:39.643962   91689 start.go:138] virtualization: kvm guest
	I0919 17:07:39.647181   91689 out.go:177] * [scheduled-stop-297595] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0919 17:07:39.648700   91689 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 17:07:39.648763   91689 notify.go:220] Checking for updates...
	I0919 17:07:39.650238   91689 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 17:07:39.651854   91689 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17240-65689/kubeconfig
	I0919 17:07:39.653208   91689 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-65689/.minikube
	I0919 17:07:39.654599   91689 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 17:07:39.655850   91689 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 17:07:39.657223   91689 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 17:07:39.691964   91689 out.go:177] * Using the kvm2 driver based on user configuration
	I0919 17:07:39.693279   91689 start.go:298] selected driver: kvm2
	I0919 17:07:39.693285   91689 start.go:902] validating driver "kvm2" against <nil>
	I0919 17:07:39.693293   91689 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 17:07:39.694182   91689 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:07:39.694270   91689 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17240-65689/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 17:07:39.709426   91689 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0919 17:07:39.709465   91689 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 17:07:39.709675   91689 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 17:07:39.709702   91689 cni.go:84] Creating CNI manager for ""
	I0919 17:07:39.709716   91689 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 17:07:39.709722   91689 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 17:07:39.709727   91689 start_flags.go:321] config:
	{Name:scheduled-stop-297595 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:scheduled-stop-297595 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:07:39.709832   91689 iso.go:125] acquiring lock: {Name:mkdf0d42546c83faf1a624ccdb8d9876db7a1a92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:07:39.711542   91689 out.go:177] * Starting control plane node scheduled-stop-297595 in cluster scheduled-stop-297595
	I0919 17:07:39.712970   91689 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 17:07:39.712990   91689 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17240-65689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I0919 17:07:39.712996   91689 cache.go:57] Caching tarball of preloaded images
	I0919 17:07:39.713063   91689 preload.go:174] Found /home/jenkins/minikube-integration/17240-65689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 17:07:39.713068   91689 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 17:07:39.713374   91689 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595/config.json ...
	I0919 17:07:39.713388   91689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595/config.json: {Name:mk93908588311d1183d15463354ccac0b0ab1784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:07:39.713508   91689 start.go:365] acquiring machines lock for scheduled-stop-297595: {Name:mk203c3120e1410acfaa868a5fe996910aac1894 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 17:07:39.713531   91689 start.go:369] acquired machines lock for "scheduled-stop-297595" in 15.535µs
	I0919 17:07:39.713548   91689 start.go:93] Provisioning new machine with config: &{Name:scheduled-stop-297595 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:scheduled-stop-297595 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 17:07:39.713598   91689 start.go:125] createHost starting for "" (driver="kvm2")
	I0919 17:07:39.715300   91689 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0919 17:07:39.715404   91689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:07:39.715438   91689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:07:39.729006   91689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34357
	I0919 17:07:39.729440   91689 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:07:39.730072   91689 main.go:141] libmachine: Using API Version  1
	I0919 17:07:39.730089   91689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:07:39.730395   91689 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:07:39.730606   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetMachineName
	I0919 17:07:39.730725   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .DriverName
	I0919 17:07:39.730878   91689 start.go:159] libmachine.API.Create for "scheduled-stop-297595" (driver="kvm2")
	I0919 17:07:39.730903   91689 client.go:168] LocalClient.Create starting
	I0919 17:07:39.730928   91689 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem
	I0919 17:07:39.730957   91689 main.go:141] libmachine: Decoding PEM data...
	I0919 17:07:39.730970   91689 main.go:141] libmachine: Parsing certificate...
	I0919 17:07:39.731017   91689 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem
	I0919 17:07:39.731032   91689 main.go:141] libmachine: Decoding PEM data...
	I0919 17:07:39.731045   91689 main.go:141] libmachine: Parsing certificate...
	I0919 17:07:39.731059   91689 main.go:141] libmachine: Running pre-create checks...
	I0919 17:07:39.731065   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .PreCreateCheck
	I0919 17:07:39.731463   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetConfigRaw
	I0919 17:07:39.731849   91689 main.go:141] libmachine: Creating machine...
	I0919 17:07:39.731857   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .Create
	I0919 17:07:39.731965   91689 main.go:141] libmachine: (scheduled-stop-297595) Creating KVM machine...
	I0919 17:07:39.733158   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | found existing default KVM network
	I0919 17:07:39.733867   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | I0919 17:07:39.733717   91712 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:d8:d6:c0} reservation:<nil>}
	I0919 17:07:39.734387   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | I0919 17:07:39.734319   91712 network.go:209] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f8b0}
	I0919 17:07:39.739372   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | trying to create private KVM network mk-scheduled-stop-297595 192.168.50.0/24...
	I0919 17:07:39.809809   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | private KVM network mk-scheduled-stop-297595 192.168.50.0/24 created
	I0919 17:07:39.809830   91689 main.go:141] libmachine: (scheduled-stop-297595) Setting up store path in /home/jenkins/minikube-integration/17240-65689/.minikube/machines/scheduled-stop-297595 ...
	I0919 17:07:39.809846   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | I0919 17:07:39.809780   91712 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17240-65689/.minikube
	I0919 17:07:39.809937   91689 main.go:141] libmachine: (scheduled-stop-297595) Building disk image from file:///home/jenkins/minikube-integration/17240-65689/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I0919 17:07:39.809979   91689 main.go:141] libmachine: (scheduled-stop-297595) Downloading /home/jenkins/minikube-integration/17240-65689/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17240-65689/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I0919 17:07:40.033250   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | I0919 17:07:40.033098   91712 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/scheduled-stop-297595/id_rsa...
	I0919 17:07:40.170867   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | I0919 17:07:40.170729   91712 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/scheduled-stop-297595/scheduled-stop-297595.rawdisk...
	I0919 17:07:40.170894   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | Writing magic tar header
	I0919 17:07:40.170906   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | Writing SSH key tar header
	I0919 17:07:40.170914   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | I0919 17:07:40.170840   91712 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17240-65689/.minikube/machines/scheduled-stop-297595 ...
	I0919 17:07:40.172169   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/scheduled-stop-297595
	I0919 17:07:40.172190   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-65689/.minikube/machines
	I0919 17:07:40.172222   91689 main.go:141] libmachine: (scheduled-stop-297595) Setting executable bit set on /home/jenkins/minikube-integration/17240-65689/.minikube/machines/scheduled-stop-297595 (perms=drwx------)
	I0919 17:07:40.172234   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-65689/.minikube
	I0919 17:07:40.172241   91689 main.go:141] libmachine: (scheduled-stop-297595) Setting executable bit set on /home/jenkins/minikube-integration/17240-65689/.minikube/machines (perms=drwxr-xr-x)
	I0919 17:07:40.172250   91689 main.go:141] libmachine: (scheduled-stop-297595) Setting executable bit set on /home/jenkins/minikube-integration/17240-65689/.minikube (perms=drwxr-xr-x)
	I0919 17:07:40.172256   91689 main.go:141] libmachine: (scheduled-stop-297595) Setting executable bit set on /home/jenkins/minikube-integration/17240-65689 (perms=drwxrwxr-x)
	I0919 17:07:40.172264   91689 main.go:141] libmachine: (scheduled-stop-297595) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0919 17:07:40.172270   91689 main.go:141] libmachine: (scheduled-stop-297595) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0919 17:07:40.172288   91689 main.go:141] libmachine: (scheduled-stop-297595) Creating domain...
	I0919 17:07:40.172298   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17240-65689
	I0919 17:07:40.172316   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0919 17:07:40.172322   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | Checking permissions on dir: /home/jenkins
	I0919 17:07:40.172328   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | Checking permissions on dir: /home
	I0919 17:07:40.172333   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | Skipping /home - not owner
	I0919 17:07:40.173376   91689 main.go:141] libmachine: (scheduled-stop-297595) define libvirt domain using xml: 
	I0919 17:07:40.173392   91689 main.go:141] libmachine: (scheduled-stop-297595) <domain type='kvm'>
	I0919 17:07:40.173399   91689 main.go:141] libmachine: (scheduled-stop-297595)   <name>scheduled-stop-297595</name>
	I0919 17:07:40.173404   91689 main.go:141] libmachine: (scheduled-stop-297595)   <memory unit='MiB'>2048</memory>
	I0919 17:07:40.173409   91689 main.go:141] libmachine: (scheduled-stop-297595)   <vcpu>2</vcpu>
	I0919 17:07:40.173413   91689 main.go:141] libmachine: (scheduled-stop-297595)   <features>
	I0919 17:07:40.173418   91689 main.go:141] libmachine: (scheduled-stop-297595)     <acpi/>
	I0919 17:07:40.173422   91689 main.go:141] libmachine: (scheduled-stop-297595)     <apic/>
	I0919 17:07:40.173427   91689 main.go:141] libmachine: (scheduled-stop-297595)     <pae/>
	I0919 17:07:40.173435   91689 main.go:141] libmachine: (scheduled-stop-297595)     
	I0919 17:07:40.173439   91689 main.go:141] libmachine: (scheduled-stop-297595)   </features>
	I0919 17:07:40.173444   91689 main.go:141] libmachine: (scheduled-stop-297595)   <cpu mode='host-passthrough'>
	I0919 17:07:40.173448   91689 main.go:141] libmachine: (scheduled-stop-297595)   
	I0919 17:07:40.173452   91689 main.go:141] libmachine: (scheduled-stop-297595)   </cpu>
	I0919 17:07:40.173470   91689 main.go:141] libmachine: (scheduled-stop-297595)   <os>
	I0919 17:07:40.173474   91689 main.go:141] libmachine: (scheduled-stop-297595)     <type>hvm</type>
	I0919 17:07:40.173479   91689 main.go:141] libmachine: (scheduled-stop-297595)     <boot dev='cdrom'/>
	I0919 17:07:40.173484   91689 main.go:141] libmachine: (scheduled-stop-297595)     <boot dev='hd'/>
	I0919 17:07:40.173489   91689 main.go:141] libmachine: (scheduled-stop-297595)     <bootmenu enable='no'/>
	I0919 17:07:40.173493   91689 main.go:141] libmachine: (scheduled-stop-297595)   </os>
	I0919 17:07:40.173498   91689 main.go:141] libmachine: (scheduled-stop-297595)   <devices>
	I0919 17:07:40.173502   91689 main.go:141] libmachine: (scheduled-stop-297595)     <disk type='file' device='cdrom'>
	I0919 17:07:40.173510   91689 main.go:141] libmachine: (scheduled-stop-297595)       <source file='/home/jenkins/minikube-integration/17240-65689/.minikube/machines/scheduled-stop-297595/boot2docker.iso'/>
	I0919 17:07:40.173515   91689 main.go:141] libmachine: (scheduled-stop-297595)       <target dev='hdc' bus='scsi'/>
	I0919 17:07:40.173520   91689 main.go:141] libmachine: (scheduled-stop-297595)       <readonly/>
	I0919 17:07:40.173524   91689 main.go:141] libmachine: (scheduled-stop-297595)     </disk>
	I0919 17:07:40.173534   91689 main.go:141] libmachine: (scheduled-stop-297595)     <disk type='file' device='disk'>
	I0919 17:07:40.173540   91689 main.go:141] libmachine: (scheduled-stop-297595)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0919 17:07:40.173551   91689 main.go:141] libmachine: (scheduled-stop-297595)       <source file='/home/jenkins/minikube-integration/17240-65689/.minikube/machines/scheduled-stop-297595/scheduled-stop-297595.rawdisk'/>
	I0919 17:07:40.173561   91689 main.go:141] libmachine: (scheduled-stop-297595)       <target dev='hda' bus='virtio'/>
	I0919 17:07:40.173566   91689 main.go:141] libmachine: (scheduled-stop-297595)     </disk>
	I0919 17:07:40.173570   91689 main.go:141] libmachine: (scheduled-stop-297595)     <interface type='network'>
	I0919 17:07:40.173579   91689 main.go:141] libmachine: (scheduled-stop-297595)       <source network='mk-scheduled-stop-297595'/>
	I0919 17:07:40.173586   91689 main.go:141] libmachine: (scheduled-stop-297595)       <model type='virtio'/>
	I0919 17:07:40.173591   91689 main.go:141] libmachine: (scheduled-stop-297595)     </interface>
	I0919 17:07:40.173595   91689 main.go:141] libmachine: (scheduled-stop-297595)     <interface type='network'>
	I0919 17:07:40.173601   91689 main.go:141] libmachine: (scheduled-stop-297595)       <source network='default'/>
	I0919 17:07:40.173605   91689 main.go:141] libmachine: (scheduled-stop-297595)       <model type='virtio'/>
	I0919 17:07:40.173610   91689 main.go:141] libmachine: (scheduled-stop-297595)     </interface>
	I0919 17:07:40.173636   91689 main.go:141] libmachine: (scheduled-stop-297595)     <serial type='pty'>
	I0919 17:07:40.173662   91689 main.go:141] libmachine: (scheduled-stop-297595)       <target port='0'/>
	I0919 17:07:40.173677   91689 main.go:141] libmachine: (scheduled-stop-297595)     </serial>
	I0919 17:07:40.173683   91689 main.go:141] libmachine: (scheduled-stop-297595)     <console type='pty'>
	I0919 17:07:40.173688   91689 main.go:141] libmachine: (scheduled-stop-297595)       <target type='serial' port='0'/>
	I0919 17:07:40.173694   91689 main.go:141] libmachine: (scheduled-stop-297595)     </console>
	I0919 17:07:40.173698   91689 main.go:141] libmachine: (scheduled-stop-297595)     <rng model='virtio'>
	I0919 17:07:40.173705   91689 main.go:141] libmachine: (scheduled-stop-297595)       <backend model='random'>/dev/random</backend>
	I0919 17:07:40.173709   91689 main.go:141] libmachine: (scheduled-stop-297595)     </rng>
	I0919 17:07:40.173714   91689 main.go:141] libmachine: (scheduled-stop-297595)     
	I0919 17:07:40.173718   91689 main.go:141] libmachine: (scheduled-stop-297595)     
	I0919 17:07:40.173723   91689 main.go:141] libmachine: (scheduled-stop-297595)   </devices>
	I0919 17:07:40.173727   91689 main.go:141] libmachine: (scheduled-stop-297595) </domain>
	I0919 17:07:40.173735   91689 main.go:141] libmachine: (scheduled-stop-297595) 
	I0919 17:07:40.177959   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:52:78:d6 in network default
	I0919 17:07:40.178563   91689 main.go:141] libmachine: (scheduled-stop-297595) Ensuring networks are active...
	I0919 17:07:40.178585   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:07:40.179229   91689 main.go:141] libmachine: (scheduled-stop-297595) Ensuring network default is active
	I0919 17:07:40.179522   91689 main.go:141] libmachine: (scheduled-stop-297595) Ensuring network mk-scheduled-stop-297595 is active
	I0919 17:07:40.180044   91689 main.go:141] libmachine: (scheduled-stop-297595) Getting domain xml...
	I0919 17:07:40.180762   91689 main.go:141] libmachine: (scheduled-stop-297595) Creating domain...
	I0919 17:07:41.394543   91689 main.go:141] libmachine: (scheduled-stop-297595) Waiting to get IP...
	I0919 17:07:41.395302   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:07:41.395693   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | unable to find current IP address of domain scheduled-stop-297595 in network mk-scheduled-stop-297595
	I0919 17:07:41.395715   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | I0919 17:07:41.395660   91712 retry.go:31] will retry after 266.217208ms: waiting for machine to come up
	I0919 17:07:41.663119   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:07:41.663560   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | unable to find current IP address of domain scheduled-stop-297595 in network mk-scheduled-stop-297595
	I0919 17:07:41.663576   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | I0919 17:07:41.663512   91712 retry.go:31] will retry after 348.857667ms: waiting for machine to come up
	I0919 17:07:42.014031   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:07:42.014455   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | unable to find current IP address of domain scheduled-stop-297595 in network mk-scheduled-stop-297595
	I0919 17:07:42.014481   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | I0919 17:07:42.014390   91712 retry.go:31] will retry after 415.926424ms: waiting for machine to come up
	I0919 17:07:42.431643   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:07:42.431956   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | unable to find current IP address of domain scheduled-stop-297595 in network mk-scheduled-stop-297595
	I0919 17:07:42.432013   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | I0919 17:07:42.431897   91712 retry.go:31] will retry after 543.853116ms: waiting for machine to come up
	I0919 17:07:42.977570   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:07:42.978140   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | unable to find current IP address of domain scheduled-stop-297595 in network mk-scheduled-stop-297595
	I0919 17:07:42.978165   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | I0919 17:07:42.978084   91712 retry.go:31] will retry after 529.247415ms: waiting for machine to come up
	I0919 17:07:43.508750   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:07:43.509355   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | unable to find current IP address of domain scheduled-stop-297595 in network mk-scheduled-stop-297595
	I0919 17:07:43.509390   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | I0919 17:07:43.509289   91712 retry.go:31] will retry after 649.666026ms: waiting for machine to come up
	I0919 17:07:44.160063   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:07:44.160519   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | unable to find current IP address of domain scheduled-stop-297595 in network mk-scheduled-stop-297595
	I0919 17:07:44.160543   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | I0919 17:07:44.160475   91712 retry.go:31] will retry after 792.931816ms: waiting for machine to come up
	I0919 17:07:44.954630   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:07:44.955048   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | unable to find current IP address of domain scheduled-stop-297595 in network mk-scheduled-stop-297595
	I0919 17:07:44.955079   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | I0919 17:07:44.954990   91712 retry.go:31] will retry after 1.293710664s: waiting for machine to come up
	I0919 17:07:46.250510   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:07:46.250957   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | unable to find current IP address of domain scheduled-stop-297595 in network mk-scheduled-stop-297595
	I0919 17:07:46.250979   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | I0919 17:07:46.250896   91712 retry.go:31] will retry after 1.185330646s: waiting for machine to come up
	I0919 17:07:47.438406   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:07:47.438804   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | unable to find current IP address of domain scheduled-stop-297595 in network mk-scheduled-stop-297595
	I0919 17:07:47.438827   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | I0919 17:07:47.438755   91712 retry.go:31] will retry after 1.699050583s: waiting for machine to come up
	I0919 17:07:49.140874   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:07:49.141332   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | unable to find current IP address of domain scheduled-stop-297595 in network mk-scheduled-stop-297595
	I0919 17:07:49.141354   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | I0919 17:07:49.141287   91712 retry.go:31] will retry after 2.668063936s: waiting for machine to come up
	I0919 17:07:51.811884   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:07:51.812393   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | unable to find current IP address of domain scheduled-stop-297595 in network mk-scheduled-stop-297595
	I0919 17:07:51.812416   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | I0919 17:07:51.812359   91712 retry.go:31] will retry after 2.491688995s: waiting for machine to come up
	I0919 17:07:54.306954   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:07:54.307348   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | unable to find current IP address of domain scheduled-stop-297595 in network mk-scheduled-stop-297595
	I0919 17:07:54.307372   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | I0919 17:07:54.307299   91712 retry.go:31] will retry after 4.276108862s: waiting for machine to come up
	I0919 17:07:58.584657   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:07:58.585026   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | unable to find current IP address of domain scheduled-stop-297595 in network mk-scheduled-stop-297595
	I0919 17:07:58.585047   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | I0919 17:07:58.584980   91712 retry.go:31] will retry after 3.656545768s: waiting for machine to come up
	I0919 17:08:02.244531   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:02.245019   91689 main.go:141] libmachine: (scheduled-stop-297595) Found IP for machine: 192.168.50.92
	I0919 17:08:02.245033   91689 main.go:141] libmachine: (scheduled-stop-297595) Reserving static IP address...
	I0919 17:08:02.245047   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has current primary IP address 192.168.50.92 and MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:02.245431   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | unable to find host DHCP lease matching {name: "scheduled-stop-297595", mac: "52:54:00:86:95:15", ip: "192.168.50.92"} in network mk-scheduled-stop-297595
	I0919 17:08:02.316683   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | Getting to WaitForSSH function...
	I0919 17:08:02.316706   91689 main.go:141] libmachine: (scheduled-stop-297595) Reserved static IP address: 192.168.50.92
	I0919 17:08:02.316725   91689 main.go:141] libmachine: (scheduled-stop-297595) Waiting for SSH to be available...
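	
	Annotation (not part of the captured log): the retry.go entries above show the wait-for-IP poll — probe the libvirt network for a DHCP lease, sleep a growing, jittered delay (266ms, 348ms, 415ms, ... 4.27s), and try again until the machine reports an address or a deadline expires. A minimal bash sketch of the backoff pattern those lines suggest, not minikube source; machine_has_ip is a hypothetical stand-in for the real DHCP-lease lookup:
	
	    machine_has_ip() { false; }        # hypothetical probe, stubbed for illustration
	    delay_ms=250                       # starting delay, roughly as in the log
	    deadline=$((SECONDS + 120))        # give up after ~2 minutes
	    until machine_has_ip; do
	      if [ "$SECONDS" -ge "$deadline" ]; then
	        echo "timed out waiting for machine to come up" >&2
	        exit 1
	      fi
	      # sleep between 1x and 2x the base delay, then grow the base ~1.5x,
	      # so retries spread out instead of hammering libvirt on a schedule
	      sleep_ms=$((delay_ms + RANDOM % delay_ms))
	      echo "will retry after ${sleep_ms}ms: waiting for machine to come up"
	      sleep "$(awk -v ms="$sleep_ms" 'BEGIN{printf "%.3f", ms/1000}')"
	      delay_ms=$((delay_ms * 3 / 2))
	    done
	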
	I0919 17:08:02.319092   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:02.319531   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:95:15", ip: ""} in network mk-scheduled-stop-297595: {Iface:virbr2 ExpiryTime:2023-09-19 18:07:55 +0000 UTC Type:0 Mac:52:54:00:86:95:15 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:minikube Clientid:01:52:54:00:86:95:15}
	I0919 17:08:02.319568   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined IP address 192.168.50.92 and MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:02.319686   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | Using SSH client type: external
	I0919 17:08:02.319722   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | Using SSH private key: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/scheduled-stop-297595/id_rsa (-rw-------)
	I0919 17:08:02.319749   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.92 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17240-65689/.minikube/machines/scheduled-stop-297595/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 17:08:02.319762   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | About to run SSH command:
	I0919 17:08:02.319774   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | exit 0
	I0919 17:08:02.405097   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | SSH cmd err, output: <nil>: 
	I0919 17:08:02.405335   91689 main.go:141] libmachine: (scheduled-stop-297595) KVM machine creation complete!
	I0919 17:08:02.405686   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetConfigRaw
	I0919 17:08:02.406212   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .DriverName
	I0919 17:08:02.406391   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .DriverName
	I0919 17:08:02.406522   91689 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0919 17:08:02.406534   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetState
	I0919 17:08:02.407882   91689 main.go:141] libmachine: Detecting operating system of created instance...
	I0919 17:08:02.407890   91689 main.go:141] libmachine: Waiting for SSH to be available...
	I0919 17:08:02.407895   91689 main.go:141] libmachine: Getting to WaitForSSH function...
	I0919 17:08:02.407901   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHHostname
	I0919 17:08:02.410163   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:02.410515   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:95:15", ip: ""} in network mk-scheduled-stop-297595: {Iface:virbr2 ExpiryTime:2023-09-19 18:07:55 +0000 UTC Type:0 Mac:52:54:00:86:95:15 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:scheduled-stop-297595 Clientid:01:52:54:00:86:95:15}
	I0919 17:08:02.410539   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined IP address 192.168.50.92 and MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:02.410636   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHPort
	I0919 17:08:02.410821   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHKeyPath
	I0919 17:08:02.410944   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHKeyPath
	I0919 17:08:02.411092   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHUsername
	I0919 17:08:02.411232   91689 main.go:141] libmachine: Using SSH client type: native
	I0919 17:08:02.411570   91689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.92 22 <nil> <nil>}
	I0919 17:08:02.411576   91689 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0919 17:08:02.520663   91689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 17:08:02.520691   91689 main.go:141] libmachine: Detecting the provisioner...
	I0919 17:08:02.520702   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHHostname
	I0919 17:08:02.523690   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:02.524097   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:95:15", ip: ""} in network mk-scheduled-stop-297595: {Iface:virbr2 ExpiryTime:2023-09-19 18:07:55 +0000 UTC Type:0 Mac:52:54:00:86:95:15 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:scheduled-stop-297595 Clientid:01:52:54:00:86:95:15}
	I0919 17:08:02.524120   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined IP address 192.168.50.92 and MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:02.524307   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHPort
	I0919 17:08:02.524537   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHKeyPath
	I0919 17:08:02.524741   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHKeyPath
	I0919 17:08:02.524893   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHUsername
	I0919 17:08:02.525092   91689 main.go:141] libmachine: Using SSH client type: native
	I0919 17:08:02.525550   91689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.92 22 <nil> <nil>}
	I0919 17:08:02.525564   91689 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0919 17:08:02.630170   91689 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0919 17:08:02.630240   91689 main.go:141] libmachine: found compatible host: buildroot
	I0919 17:08:02.630249   91689 main.go:141] libmachine: Provisioning with buildroot...
	I0919 17:08:02.630259   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetMachineName
	I0919 17:08:02.630538   91689 buildroot.go:166] provisioning hostname "scheduled-stop-297595"
	I0919 17:08:02.630560   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetMachineName
	I0919 17:08:02.630767   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHHostname
	I0919 17:08:02.633086   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:02.633394   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:95:15", ip: ""} in network mk-scheduled-stop-297595: {Iface:virbr2 ExpiryTime:2023-09-19 18:07:55 +0000 UTC Type:0 Mac:52:54:00:86:95:15 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:scheduled-stop-297595 Clientid:01:52:54:00:86:95:15}
	I0919 17:08:02.633431   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined IP address 192.168.50.92 and MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:02.633529   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHPort
	I0919 17:08:02.633712   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHKeyPath
	I0919 17:08:02.633840   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHKeyPath
	I0919 17:08:02.633968   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHUsername
	I0919 17:08:02.634137   91689 main.go:141] libmachine: Using SSH client type: native
	I0919 17:08:02.634586   91689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.92 22 <nil> <nil>}
	I0919 17:08:02.634601   91689 main.go:141] libmachine: About to run SSH command:
	sudo hostname scheduled-stop-297595 && echo "scheduled-stop-297595" | sudo tee /etc/hostname
	I0919 17:08:02.752759   91689 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-297595
	
	I0919 17:08:02.752782   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHHostname
	I0919 17:08:02.755724   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:02.756031   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:95:15", ip: ""} in network mk-scheduled-stop-297595: {Iface:virbr2 ExpiryTime:2023-09-19 18:07:55 +0000 UTC Type:0 Mac:52:54:00:86:95:15 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:scheduled-stop-297595 Clientid:01:52:54:00:86:95:15}
	I0919 17:08:02.756050   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined IP address 192.168.50.92 and MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:02.756275   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHPort
	I0919 17:08:02.756462   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHKeyPath
	I0919 17:08:02.756672   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHKeyPath
	I0919 17:08:02.756886   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHUsername
	I0919 17:08:02.757051   91689 main.go:141] libmachine: Using SSH client type: native
	I0919 17:08:02.757414   91689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.92 22 <nil> <nil>}
	I0919 17:08:02.757427   91689 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sscheduled-stop-297595' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 scheduled-stop-297595/g' /etc/hosts;
				else 
					echo '127.0.1.1 scheduled-stop-297595' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 17:08:02.873158   91689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 17:08:02.873178   91689 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17240-65689/.minikube CaCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17240-65689/.minikube}
	I0919 17:08:02.873197   91689 buildroot.go:174] setting up certificates
	I0919 17:08:02.873218   91689 provision.go:83] configureAuth start
	I0919 17:08:02.873226   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetMachineName
	I0919 17:08:02.873545   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetIP
	I0919 17:08:02.876218   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:02.876533   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:95:15", ip: ""} in network mk-scheduled-stop-297595: {Iface:virbr2 ExpiryTime:2023-09-19 18:07:55 +0000 UTC Type:0 Mac:52:54:00:86:95:15 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:scheduled-stop-297595 Clientid:01:52:54:00:86:95:15}
	I0919 17:08:02.876552   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined IP address 192.168.50.92 and MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:02.876799   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHHostname
	I0919 17:08:02.879160   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:02.879524   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:95:15", ip: ""} in network mk-scheduled-stop-297595: {Iface:virbr2 ExpiryTime:2023-09-19 18:07:55 +0000 UTC Type:0 Mac:52:54:00:86:95:15 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:scheduled-stop-297595 Clientid:01:52:54:00:86:95:15}
	I0919 17:08:02.879547   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined IP address 192.168.50.92 and MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:02.879710   91689 provision.go:138] copyHostCerts
	I0919 17:08:02.879757   91689 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem, removing ...
	I0919 17:08:02.879763   91689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem
	I0919 17:08:02.879825   91689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem (1078 bytes)
	I0919 17:08:02.879956   91689 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem, removing ...
	I0919 17:08:02.879960   91689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem
	I0919 17:08:02.879985   91689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem (1123 bytes)
	I0919 17:08:02.880034   91689 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem, removing ...
	I0919 17:08:02.880037   91689 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem
	I0919 17:08:02.880055   91689 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem (1675 bytes)
	I0919 17:08:02.880093   91689 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem org=jenkins.scheduled-stop-297595 san=[192.168.50.92 192.168.50.92 localhost 127.0.0.1 minikube scheduled-stop-297595]
	I0919 17:08:02.948413   91689 provision.go:172] copyRemoteCerts
	I0919 17:08:02.948457   91689 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 17:08:02.948478   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHHostname
	I0919 17:08:02.951147   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:02.951460   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:95:15", ip: ""} in network mk-scheduled-stop-297595: {Iface:virbr2 ExpiryTime:2023-09-19 18:07:55 +0000 UTC Type:0 Mac:52:54:00:86:95:15 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:scheduled-stop-297595 Clientid:01:52:54:00:86:95:15}
	I0919 17:08:02.951489   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined IP address 192.168.50.92 and MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:02.951608   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHPort
	I0919 17:08:02.951779   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHKeyPath
	I0919 17:08:02.951907   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHUsername
	I0919 17:08:02.952014   91689 sshutil.go:53] new ssh client: &{IP:192.168.50.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/scheduled-stop-297595/id_rsa Username:docker}
	I0919 17:08:03.036688   91689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 17:08:03.059226   91689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0919 17:08:03.079935   91689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 17:08:03.101070   91689 provision.go:86] duration metric: configureAuth took 227.837528ms
	I0919 17:08:03.101093   91689 buildroot.go:189] setting minikube options for container-runtime
	I0919 17:08:03.101266   91689 config.go:182] Loaded profile config "scheduled-stop-297595": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 17:08:03.101299   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .DriverName
	I0919 17:08:03.101605   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHHostname
	I0919 17:08:03.104065   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:03.104414   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:95:15", ip: ""} in network mk-scheduled-stop-297595: {Iface:virbr2 ExpiryTime:2023-09-19 18:07:55 +0000 UTC Type:0 Mac:52:54:00:86:95:15 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:scheduled-stop-297595 Clientid:01:52:54:00:86:95:15}
	I0919 17:08:03.104434   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined IP address 192.168.50.92 and MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:03.104565   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHPort
	I0919 17:08:03.104764   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHKeyPath
	I0919 17:08:03.104939   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHKeyPath
	I0919 17:08:03.105121   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHUsername
	I0919 17:08:03.105299   91689 main.go:141] libmachine: Using SSH client type: native
	I0919 17:08:03.105738   91689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.92 22 <nil> <nil>}
	I0919 17:08:03.105746   91689 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 17:08:03.214733   91689 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0919 17:08:03.214747   91689 buildroot.go:70] root file system type: tmpfs
	I0919 17:08:03.214852   91689 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 17:08:03.214865   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHHostname
	I0919 17:08:03.217601   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:03.217927   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:95:15", ip: ""} in network mk-scheduled-stop-297595: {Iface:virbr2 ExpiryTime:2023-09-19 18:07:55 +0000 UTC Type:0 Mac:52:54:00:86:95:15 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:scheduled-stop-297595 Clientid:01:52:54:00:86:95:15}
	I0919 17:08:03.217951   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined IP address 192.168.50.92 and MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:03.218067   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHPort
	I0919 17:08:03.218287   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHKeyPath
	I0919 17:08:03.218437   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHKeyPath
	I0919 17:08:03.218574   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHUsername
	I0919 17:08:03.218701   91689 main.go:141] libmachine: Using SSH client type: native
	I0919 17:08:03.218993   91689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.92 22 <nil> <nil>}
	I0919 17:08:03.219050   91689 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 17:08:03.338033   91689 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 17:08:03.338062   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHHostname
	I0919 17:08:03.340691   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:03.341039   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:95:15", ip: ""} in network mk-scheduled-stop-297595: {Iface:virbr2 ExpiryTime:2023-09-19 18:07:55 +0000 UTC Type:0 Mac:52:54:00:86:95:15 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:scheduled-stop-297595 Clientid:01:52:54:00:86:95:15}
	I0919 17:08:03.341068   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined IP address 192.168.50.92 and MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:03.341232   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHPort
	I0919 17:08:03.341409   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHKeyPath
	I0919 17:08:03.341542   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHKeyPath
	I0919 17:08:03.341655   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHUsername
	I0919 17:08:03.341787   91689 main.go:141] libmachine: Using SSH client type: native
	I0919 17:08:03.342109   91689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.92 22 <nil> <nil>}
	I0919 17:08:03.342132   91689 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
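	
	Annotation (not part of the captured log): the one-liner above is an update-if-changed guard. diff exits non-zero when the rendered unit differs from the installed one, or when the installed one does not exist yet (as in the first-boot output just below), and only then is the new file moved into place and Docker reloaded, enabled, and restarted; an unchanged config never restarts the daemon. The same logic unrolled in bash, using the paths from the log:
	
	    new=/lib/systemd/system/docker.service.new
	    cur=/lib/systemd/system/docker.service
	    if ! sudo diff -u "$cur" "$new"; then
	      # files differ, or $cur is missing on first boot
	      sudo mv "$new" "$cur"
	      sudo systemctl -f daemon-reload
	      sudo systemctl -f enable docker
	      sudo systemctl -f restart docker
	    fi
	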
	I0919 17:08:04.082869   91689 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0919 17:08:04.082899   91689 main.go:141] libmachine: Checking connection to Docker...
	I0919 17:08:04.082912   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetURL
	I0919 17:08:04.084094   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | Using libvirt version 6000000
	I0919 17:08:04.086446   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:04.086829   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:95:15", ip: ""} in network mk-scheduled-stop-297595: {Iface:virbr2 ExpiryTime:2023-09-19 18:07:55 +0000 UTC Type:0 Mac:52:54:00:86:95:15 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:scheduled-stop-297595 Clientid:01:52:54:00:86:95:15}
	I0919 17:08:04.086857   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined IP address 192.168.50.92 and MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:04.087009   91689 main.go:141] libmachine: Docker is up and running!
	I0919 17:08:04.087020   91689 main.go:141] libmachine: Reticulating splines...
	I0919 17:08:04.087037   91689 client.go:171] LocalClient.Create took 24.356116923s
	I0919 17:08:04.087057   91689 start.go:167] duration metric: libmachine.API.Create for "scheduled-stop-297595" took 24.356182204s
	I0919 17:08:04.087063   91689 start.go:300] post-start starting for "scheduled-stop-297595" (driver="kvm2")
	I0919 17:08:04.087072   91689 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 17:08:04.087095   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .DriverName
	I0919 17:08:04.087337   91689 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 17:08:04.087353   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHHostname
	I0919 17:08:04.089330   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:04.089604   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:95:15", ip: ""} in network mk-scheduled-stop-297595: {Iface:virbr2 ExpiryTime:2023-09-19 18:07:55 +0000 UTC Type:0 Mac:52:54:00:86:95:15 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:scheduled-stop-297595 Clientid:01:52:54:00:86:95:15}
	I0919 17:08:04.089639   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined IP address 192.168.50.92 and MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:04.089773   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHPort
	I0919 17:08:04.089939   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHKeyPath
	I0919 17:08:04.090096   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHUsername
	I0919 17:08:04.090221   91689 sshutil.go:53] new ssh client: &{IP:192.168.50.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/scheduled-stop-297595/id_rsa Username:docker}
	I0919 17:08:04.171406   91689 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 17:08:04.175287   91689 info.go:137] Remote host: Buildroot 2021.02.12
	I0919 17:08:04.175299   91689 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-65689/.minikube/addons for local assets ...
	I0919 17:08:04.175357   91689 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-65689/.minikube/files for local assets ...
	I0919 17:08:04.175419   91689 filesync.go:149] local asset: /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem -> 733972.pem in /etc/ssl/certs
	I0919 17:08:04.175493   91689 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 17:08:04.184070   91689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem --> /etc/ssl/certs/733972.pem (1708 bytes)
	I0919 17:08:04.206093   91689 start.go:303] post-start completed in 119.017019ms
	I0919 17:08:04.206140   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetConfigRaw
	I0919 17:08:04.206756   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetIP
	I0919 17:08:04.209249   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:04.209569   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:95:15", ip: ""} in network mk-scheduled-stop-297595: {Iface:virbr2 ExpiryTime:2023-09-19 18:07:55 +0000 UTC Type:0 Mac:52:54:00:86:95:15 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:scheduled-stop-297595 Clientid:01:52:54:00:86:95:15}
	I0919 17:08:04.209582   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined IP address 192.168.50.92 and MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:04.209804   91689 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595/config.json ...
	I0919 17:08:04.209955   91689 start.go:128] duration metric: createHost completed in 24.496349924s
	I0919 17:08:04.209968   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHHostname
	I0919 17:08:04.212266   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:04.212588   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:95:15", ip: ""} in network mk-scheduled-stop-297595: {Iface:virbr2 ExpiryTime:2023-09-19 18:07:55 +0000 UTC Type:0 Mac:52:54:00:86:95:15 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:scheduled-stop-297595 Clientid:01:52:54:00:86:95:15}
	I0919 17:08:04.212612   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined IP address 192.168.50.92 and MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:04.212754   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHPort
	I0919 17:08:04.212931   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHKeyPath
	I0919 17:08:04.213066   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHKeyPath
	I0919 17:08:04.213154   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHUsername
	I0919 17:08:04.213252   91689 main.go:141] libmachine: Using SSH client type: native
	I0919 17:08:04.213662   91689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.92 22 <nil> <nil>}
	I0919 17:08:04.213668   91689 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0919 17:08:04.322284   91689 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695143284.292159147
	
	I0919 17:08:04.322297   91689 fix.go:206] guest clock: 1695143284.292159147
	I0919 17:08:04.322304   91689 fix.go:219] Guest: 2023-09-19 17:08:04.292159147 +0000 UTC Remote: 2023-09-19 17:08:04.209959854 +0000 UTC m=+24.597437567 (delta=82.199293ms)
	I0919 17:08:04.322341   91689 fix.go:190] guest clock delta is within tolerance: 82.199293ms
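	
	Annotation (not part of the captured log): the delta above is simply guest minus remote wall clock, 1695143284.292159147 s - 1695143284.209959854 s = 0.082199293 s = 82.199293 ms, which is within the drift tolerance, so the guest clock is left unadjusted.
	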
	I0919 17:08:04.322346   91689 start.go:83] releasing machines lock for "scheduled-stop-297595", held for 24.60881025s
	I0919 17:08:04.322371   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .DriverName
	I0919 17:08:04.322654   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetIP
	I0919 17:08:04.325186   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:04.325580   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:95:15", ip: ""} in network mk-scheduled-stop-297595: {Iface:virbr2 ExpiryTime:2023-09-19 18:07:55 +0000 UTC Type:0 Mac:52:54:00:86:95:15 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:scheduled-stop-297595 Clientid:01:52:54:00:86:95:15}
	I0919 17:08:04.325607   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined IP address 192.168.50.92 and MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:04.325766   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .DriverName
	I0919 17:08:04.326315   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .DriverName
	I0919 17:08:04.326494   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .DriverName
	I0919 17:08:04.326561   91689 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 17:08:04.326598   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHHostname
	I0919 17:08:04.326694   91689 ssh_runner.go:195] Run: cat /version.json
	I0919 17:08:04.326713   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHHostname
	I0919 17:08:04.329064   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:04.329195   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:04.329416   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:95:15", ip: ""} in network mk-scheduled-stop-297595: {Iface:virbr2 ExpiryTime:2023-09-19 18:07:55 +0000 UTC Type:0 Mac:52:54:00:86:95:15 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:scheduled-stop-297595 Clientid:01:52:54:00:86:95:15}
	I0919 17:08:04.329441   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined IP address 192.168.50.92 and MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:04.329467   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:95:15", ip: ""} in network mk-scheduled-stop-297595: {Iface:virbr2 ExpiryTime:2023-09-19 18:07:55 +0000 UTC Type:0 Mac:52:54:00:86:95:15 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:scheduled-stop-297595 Clientid:01:52:54:00:86:95:15}
	I0919 17:08:04.329501   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined IP address 192.168.50.92 and MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:04.329555   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHPort
	I0919 17:08:04.329748   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHKeyPath
	I0919 17:08:04.329790   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHPort
	I0919 17:08:04.329888   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHUsername
	I0919 17:08:04.329947   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHKeyPath
	I0919 17:08:04.330018   91689 sshutil.go:53] new ssh client: &{IP:192.168.50.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/scheduled-stop-297595/id_rsa Username:docker}
	I0919 17:08:04.330041   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHUsername
	I0919 17:08:04.330155   91689 sshutil.go:53] new ssh client: &{IP:192.168.50.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/scheduled-stop-297595/id_rsa Username:docker}
	I0919 17:08:04.410442   91689 ssh_runner.go:195] Run: systemctl --version
	I0919 17:08:04.431433   91689 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 17:08:04.436619   91689 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 17:08:04.436678   91689 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 17:08:04.452657   91689 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 17:08:04.452671   91689 start.go:469] detecting cgroup driver to use...
	I0919 17:08:04.452934   91689 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 17:08:04.468440   91689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0919 17:08:04.478117   91689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 17:08:04.487895   91689 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0919 17:08:04.487945   91689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0919 17:08:04.497966   91689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 17:08:04.507508   91689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 17:08:04.516915   91689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 17:08:04.526886   91689 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 17:08:04.536258   91689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 17:08:04.546147   91689 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 17:08:04.555124   91689 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 17:08:04.564153   91689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 17:08:04.662101   91689 ssh_runner.go:195] Run: sudo systemctl restart containerd
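	The run of sed commands above rewrites /etc/containerd/config.toml in place — pinning the pause image, forcing SystemdCgroup = false for the cgroupfs driver, and normalizing the runtime to io.containerd.runc.v2 — before the daemon-reload and restart. A hypothetical Go driver for the same idempotent edits; the sed expressions are copied from the log, while the local exec wrapper is illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
)

// edits mirrors the sed rewrites in the log above.
var edits = []string{
	`s|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|`,
	`s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g`,
	`s|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g`,
}

func main() {
	for _, e := range edits {
		// sudo omitted; assumes the caller already runs as root.
		cmd := exec.Command("sed", "-i", "-r", e, "/etc/containerd/config.toml")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("sed %q failed: %v (%s)\n", e, err, out)
			return
		}
	}
	fmt.Println("containerd config rewritten; restart containerd to apply")
}
```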
	I0919 17:08:04.678914   91689 start.go:469] detecting cgroup driver to use...
	I0919 17:08:04.678998   91689 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 17:08:04.693151   91689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 17:08:04.705083   91689 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 17:08:04.723710   91689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 17:08:04.735830   91689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 17:08:04.747813   91689 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 17:08:04.781897   91689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 17:08:04.793808   91689 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 17:08:04.811097   91689 ssh_runner.go:195] Run: which cri-dockerd
	I0919 17:08:04.814835   91689 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 17:08:04.822746   91689 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0919 17:08:04.838330   91689 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 17:08:04.935413   91689 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 17:08:05.036266   91689 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0919 17:08:05.036289   91689 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0919 17:08:05.052226   91689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 17:08:05.148254   91689 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 17:08:06.526913   91689 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.378618723s)
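	The 144-byte /etc/docker/daemon.json pushed before this restart is what switches Docker to the cgroupfs driver. The log shows only the file's size, so beyond the confirmed cgroup driver the fields below are assumptions about its shape:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Assumed reconstruction of minikube's daemon.json: only the cgroupfs
	// exec-opt is confirmed by the "configuring docker to use \"cgroupfs\""
	// line above; log-driver, log-opts and storage-driver are illustrative.
	cfg := map[string]any{
		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
		"log-driver":     "json-file",
		"log-opts":       map[string]string{"max-size": "100m"},
		"storage-driver": "overlay2",
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}
```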
	I0919 17:08:06.526978   91689 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 17:08:06.625096   91689 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 17:08:06.730080   91689 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 17:08:06.841896   91689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 17:08:06.951440   91689 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 17:08:06.967333   91689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 17:08:07.076482   91689 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0919 17:08:07.159978   91689 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 17:08:07.160031   91689 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 17:08:07.165923   91689 start.go:537] Will wait 60s for crictl version
	I0919 17:08:07.165962   91689 ssh_runner.go:195] Run: which crictl
	I0919 17:08:07.169784   91689 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 17:08:07.226139   91689 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
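	The two 60s waits above first poll for the /var/run/cri-dockerd.sock socket file and then for a successful `crictl version`. A minimal sketch of such a retry loop; the 500ms poll interval is an assumption:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes, mirroring
// the "Will wait 60s for socket path" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("socket present; safe to run: crictl version")
}
```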
	I0919 17:08:07.226207   91689 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 17:08:07.251983   91689 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 17:08:07.278172   91689 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I0919 17:08:07.278217   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetIP
	I0919 17:08:07.280837   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:07.281158   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:95:15", ip: ""} in network mk-scheduled-stop-297595: {Iface:virbr2 ExpiryTime:2023-09-19 18:07:55 +0000 UTC Type:0 Mac:52:54:00:86:95:15 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:scheduled-stop-297595 Clientid:01:52:54:00:86:95:15}
	I0919 17:08:07.281185   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined IP address 192.168.50.92 and MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:07.281390   91689 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0919 17:08:07.285102   91689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 17:08:07.297041   91689 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 17:08:07.297084   91689 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 17:08:07.314227   91689 docker.go:636] Got preloaded images: 
	I0919 17:08:07.314235   91689 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.2 wasn't preloaded
	I0919 17:08:07.314271   91689 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0919 17:08:07.323053   91689 ssh_runner.go:195] Run: which lz4
	I0919 17:08:07.326337   91689 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0919 17:08:07.330005   91689 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 17:08:07.330023   91689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (422207204 bytes)
	I0919 17:08:08.933892   91689 docker.go:600] Took 1.607570 seconds to copy over tarball
	I0919 17:08:08.933963   91689 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0919 17:08:11.385757   91689 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.451763525s)
	I0919 17:08:11.385773   91689 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0919 17:08:11.423393   91689 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0919 17:08:11.432704   91689 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0919 17:08:11.448045   91689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 17:08:11.552348   91689 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 17:08:14.807053   91689 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.254678377s)
	I0919 17:08:14.807136   91689 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 17:08:14.827055   91689 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0919 17:08:14.827072   91689 cache_images.go:84] Images are preloaded, skipping loading
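	The preload path above is: detect that registry.k8s.io/kube-apiserver:v1.28.2 is absent, scp the lz4 tarball over, extract it under /var, restart Docker, and re-list images. The presence check can be expressed with the same `docker images --format` invocation the log uses:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasImage runs `docker images --format {{.Repository}}:{{.Tag}}` (the same
// command as in the log) and reports whether want is present — i.e. the
// check behind "registry.k8s.io/kube-apiserver:v1.28.2 wasn't preloaded".
func hasImage(want string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.TrimSpace(line) == want {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.2")
	fmt.Println(ok, err)
}
```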
	I0919 17:08:14.827127   91689 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 17:08:14.858331   91689 cni.go:84] Creating CNI manager for ""
	I0919 17:08:14.858346   91689 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 17:08:14.858366   91689 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0919 17:08:14.858398   91689 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.92 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:scheduled-stop-297595 NodeName:scheduled-stop-297595 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 17:08:14.858545   91689 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.92
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "scheduled-stop-297595"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.92
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.92"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
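	The config above is rendered from the kubeadm options struct logged earlier. A pared-down sketch of that templating step using Go's text/template; the template text and field names here are illustrative, not minikube's actual template:

```go
package main

import (
	"os"
	"text/template"
)

// opts holds the fields this sketch pulls from the kubeadm options struct
// logged above; names are illustrative, not minikube's actual types.
type opts struct {
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values copied from the generated config above.
	t.Execute(os.Stdout, opts{
		KubernetesVersion: "v1.28.2",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	})
}
```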
	
	I0919 17:08:14.858632   91689 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=scheduled-stop-297595 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:scheduled-stop-297595 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0919 17:08:14.858697   91689 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0919 17:08:14.867573   91689 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 17:08:14.867628   91689 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 17:08:14.875496   91689 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (383 bytes)
	I0919 17:08:14.890501   91689 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 17:08:14.905415   91689 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2107 bytes)
	I0919 17:08:14.920469   91689 ssh_runner.go:195] Run: grep 192.168.50.92	control-plane.minikube.internal$ /etc/hosts
	I0919 17:08:14.923948   91689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.92	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
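	The /etc/hosts one-liner above strips any stale control-plane.minikube.internal entry, appends the fresh mapping, and copies the result back via a temp file. The same idiom in Go (the temp-file copy step is elided for brevity):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// injectHost reproduces the /etc/hosts idiom from the log: drop any line
// already ending in "\t<name>", then append the fresh ip<TAB>name mapping.
func injectHost(hosts, ip, name string) string {
	var out []string
	for _, line := range strings.Split(hosts, "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			out = append(out, line)
		}
	}
	out = append(out, ip+"\t"+name)
	return strings.Join(out, "\n")
}

func main() {
	b, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	hosts := strings.TrimRight(string(b), "\n")
	fmt.Println(injectHost(hosts, "192.168.50.92", "control-plane.minikube.internal"))
}
```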
	I0919 17:08:14.935468   91689 certs.go:56] Setting up /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595 for IP: 192.168.50.92
	I0919 17:08:14.935484   91689 certs.go:190] acquiring lock for shared ca certs: {Name:mkf975c4ed215d047afb89379d3c517cec3820b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:08:14.935633   91689 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.key
	I0919 17:08:14.935663   91689 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.key
	I0919 17:08:14.935698   91689 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595/client.key
	I0919 17:08:14.935706   91689 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595/client.crt with IP's: []
	I0919 17:08:15.112028   91689 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595/client.crt ...
	I0919 17:08:15.112043   91689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595/client.crt: {Name:mkf413d78193565afc8a5335c9bebd166a588db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:08:15.112235   91689 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595/client.key ...
	I0919 17:08:15.112241   91689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595/client.key: {Name:mke6f9bce6b2e7ac6955a6f8f21a19d5b54bf562 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:08:15.112320   91689 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595/apiserver.key.c9035cc7
	I0919 17:08:15.112330   91689 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595/apiserver.crt.c9035cc7 with IP's: [192.168.50.92 10.96.0.1 127.0.0.1 10.0.0.1]
	I0919 17:08:15.411546   91689 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595/apiserver.crt.c9035cc7 ...
	I0919 17:08:15.411561   91689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595/apiserver.crt.c9035cc7: {Name:mk99218c661119bafdb7d45550796d3972eb298c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:08:15.411719   91689 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595/apiserver.key.c9035cc7 ...
	I0919 17:08:15.411724   91689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595/apiserver.key.c9035cc7: {Name:mkdf5d328399d7ce61145931e8c24c513d132de3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:08:15.411788   91689 certs.go:337] copying /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595/apiserver.crt.c9035cc7 -> /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595/apiserver.crt
	I0919 17:08:15.411847   91689 certs.go:341] copying /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595/apiserver.key.c9035cc7 -> /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595/apiserver.key
	I0919 17:08:15.411895   91689 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595/proxy-client.key
	I0919 17:08:15.411904   91689 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595/proxy-client.crt with IP's: []
	I0919 17:08:15.666471   91689 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595/proxy-client.crt ...
	I0919 17:08:15.666486   91689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595/proxy-client.crt: {Name:mk453f9a81415cb83bc2ec3a16a24ca3f72406fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:08:15.666670   91689 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595/proxy-client.key ...
	I0919 17:08:15.666685   91689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595/proxy-client.key: {Name:mka108ad5449a6d0803d640cab4ab5b2c4cb9187 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
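	The certs steps above mint a client cert, an apiserver serving cert signed for the IPs [192.168.50.92 10.96.0.1 127.0.0.1 10.0.0.1], and an aggregator proxy-client cert. A minimal crypto/x509 sketch of issuing a serving cert with those IP SANs; it self-signs purely for illustration, whereas minikube signs with the cached minikubeCA it found earlier:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed stand-in: minikube actually signs with the cached
	// minikubeCA key ("skipping minikubeCA CA generation" above).
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		fmt.Println(err)
		return
	}
	tpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		// The same IP SANs the apiserver cert above is generated with.
		IPAddresses: []net.IP{
			net.ParseIP("192.168.50.92"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tpl, tpl, &key.PublicKey, key)
	if err != nil {
		fmt.Println(err)
		return
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```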
	I0919 17:08:15.666840   91689 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/73397.pem (1338 bytes)
	W0919 17:08:15.666872   91689 certs.go:433] ignoring /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/73397_empty.pem, impossibly tiny 0 bytes
	I0919 17:08:15.666879   91689 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 17:08:15.666899   91689 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem (1078 bytes)
	I0919 17:08:15.666918   91689 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem (1123 bytes)
	I0919 17:08:15.666936   91689 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem (1675 bytes)
	I0919 17:08:15.666966   91689 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem (1708 bytes)
	I0919 17:08:15.667514   91689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0919 17:08:15.691662   91689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 17:08:15.714519   91689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 17:08:15.736503   91689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/scheduled-stop-297595/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 17:08:15.758316   91689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 17:08:15.780156   91689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 17:08:15.801980   91689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 17:08:15.823990   91689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 17:08:15.846339   91689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem --> /usr/share/ca-certificates/733972.pem (1708 bytes)
	I0919 17:08:15.871590   91689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 17:08:15.892587   91689 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/certs/73397.pem --> /usr/share/ca-certificates/73397.pem (1338 bytes)
	I0919 17:08:15.914219   91689 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 17:08:15.929427   91689 ssh_runner.go:195] Run: openssl version
	I0919 17:08:15.934402   91689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/733972.pem && ln -fs /usr/share/ca-certificates/733972.pem /etc/ssl/certs/733972.pem"
	I0919 17:08:15.943325   91689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/733972.pem
	I0919 17:08:15.947696   91689 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 19 16:39 /usr/share/ca-certificates/733972.pem
	I0919 17:08:15.947740   91689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/733972.pem
	I0919 17:08:15.952990   91689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/733972.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 17:08:15.962027   91689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 17:08:15.970987   91689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:08:15.975261   91689 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 19 16:35 /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:08:15.975302   91689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:08:15.980354   91689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 17:08:15.989390   91689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73397.pem && ln -fs /usr/share/ca-certificates/73397.pem /etc/ssl/certs/73397.pem"
	I0919 17:08:15.998237   91689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73397.pem
	I0919 17:08:16.002619   91689 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 19 16:39 /usr/share/ca-certificates/73397.pem
	I0919 17:08:16.002662   91689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73397.pem
	I0919 17:08:16.007949   91689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/73397.pem /etc/ssl/certs/51391683.0"
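	Each `ln -fs` above creates the hash-named alias (e.g. /etc/ssl/certs/b5213941.0) that OpenSSL uses to look up a CA by subject hash, with the hash coming from the preceding `openssl x509 -hash -noout` run. A small sketch of that pairing:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCert reproduces the "openssl x509 -hash" + symlink dance above:
// OpenSSL looks up CAs by <subject-hash>.0, so each .pem placed under
// /usr/share/ca-certificates gets a hash-named link in /etc/ssl/certs.
func linkCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	os.Remove(link) // mirror ln -fs semantics
	return os.Symlink(pemPath, link)
}

func main() {
	fmt.Println(linkCert("/usr/share/ca-certificates/minikubeCA.pem"))
}
```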
	I0919 17:08:16.017097   91689 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0919 17:08:16.020924   91689 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0919 17:08:16.020966   91689 kubeadm.go:404] StartCluster: {Name:scheduled-stop-297595 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:scheduled-stop-297595 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.92 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:08:16.021061   91689 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 17:08:16.039628   91689 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 17:08:16.047996   91689 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 17:08:16.055977   91689 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 17:08:16.064119   91689 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 17:08:16.064154   91689 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0919 17:08:16.182446   91689 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0919 17:08:16.182484   91689 kubeadm.go:322] [preflight] Running pre-flight checks
	I0919 17:08:16.457703   91689 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 17:08:16.457809   91689 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 17:08:16.457930   91689 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 17:08:16.804585   91689 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 17:08:16.807896   91689 out.go:204]   - Generating certificates and keys ...
	I0919 17:08:16.808007   91689 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0919 17:08:16.808107   91689 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0919 17:08:17.044931   91689 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 17:08:17.109275   91689 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0919 17:08:17.319469   91689 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0919 17:08:17.414453   91689 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0919 17:08:17.571156   91689 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0919 17:08:17.571313   91689 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost scheduled-stop-297595] and IPs [192.168.50.92 127.0.0.1 ::1]
	I0919 17:08:17.669569   91689 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0919 17:08:17.669806   91689 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost scheduled-stop-297595] and IPs [192.168.50.92 127.0.0.1 ::1]
	I0919 17:08:17.811367   91689 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 17:08:17.936387   91689 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 17:08:18.163694   91689 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0919 17:08:18.163779   91689 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 17:08:18.613464   91689 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 17:08:18.894672   91689 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 17:08:19.062169   91689 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 17:08:19.303340   91689 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 17:08:19.304136   91689 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 17:08:19.309527   91689 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 17:08:19.311927   91689 out.go:204]   - Booting up control plane ...
	I0919 17:08:19.312090   91689 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 17:08:19.312177   91689 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 17:08:19.312258   91689 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 17:08:19.326588   91689 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 17:08:19.327382   91689 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 17:08:19.327434   91689 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0919 17:08:19.449095   91689 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0919 17:08:26.452306   91689 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.006268 seconds
	I0919 17:08:26.453111   91689 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 17:08:26.470892   91689 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 17:08:27.007644   91689 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 17:08:27.007842   91689 kubeadm.go:322] [mark-control-plane] Marking the node scheduled-stop-297595 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 17:08:27.521501   91689 kubeadm.go:322] [bootstrap-token] Using token: zhiuun.pai45gho5gmlmz2o
	I0919 17:08:27.522952   91689 out.go:204]   - Configuring RBAC rules ...
	I0919 17:08:27.523082   91689 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 17:08:27.529404   91689 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 17:08:27.543902   91689 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 17:08:27.548713   91689 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 17:08:27.554756   91689 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 17:08:27.564420   91689 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 17:08:27.579427   91689 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 17:08:27.832022   91689 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0919 17:08:27.942707   91689 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0919 17:08:27.948885   91689 kubeadm.go:322] 
	I0919 17:08:27.948939   91689 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0919 17:08:27.948943   91689 kubeadm.go:322] 
	I0919 17:08:27.949014   91689 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0919 17:08:27.949019   91689 kubeadm.go:322] 
	I0919 17:08:27.949044   91689 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0919 17:08:27.949116   91689 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 17:08:27.949155   91689 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 17:08:27.949159   91689 kubeadm.go:322] 
	I0919 17:08:27.949209   91689 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0919 17:08:27.949214   91689 kubeadm.go:322] 
	I0919 17:08:27.949279   91689 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 17:08:27.949294   91689 kubeadm.go:322] 
	I0919 17:08:27.949340   91689 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0919 17:08:27.949451   91689 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 17:08:27.949543   91689 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 17:08:27.949549   91689 kubeadm.go:322] 
	I0919 17:08:27.949675   91689 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 17:08:27.949735   91689 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0919 17:08:27.949738   91689 kubeadm.go:322] 
	I0919 17:08:27.949836   91689 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zhiuun.pai45gho5gmlmz2o \
	I0919 17:08:27.949954   91689 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f578345f21d61b70dd299dc2a715bc70c42e620a22beab30c3294ae8bc341510 \
	I0919 17:08:27.949976   91689 kubeadm.go:322] 	--control-plane 
	I0919 17:08:27.949981   91689 kubeadm.go:322] 
	I0919 17:08:27.950068   91689 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0919 17:08:27.950074   91689 kubeadm.go:322] 
	I0919 17:08:27.950158   91689 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zhiuun.pai45gho5gmlmz2o \
	I0919 17:08:27.950268   91689 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f578345f21d61b70dd299dc2a715bc70c42e620a22beab30c3294ae8bc341510 
	I0919 17:08:27.951098   91689 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 17:08:27.951338   91689 cni.go:84] Creating CNI manager for ""
	I0919 17:08:27.951356   91689 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 17:08:27.953162   91689 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 17:08:27.954957   91689 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 17:08:27.980535   91689 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
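	The 457-byte /etc/cni/net.d/1-k8s.conflist written above is the bridge CNI config announced on the previous line. The log does not show its contents, so the following reconstruction of a standard bridge+portmap conflist is an assumption:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// An assumed shape for the bridge conflist minikube writes as
// /etc/cni/net.d/1-k8s.conflist; the log only shows its size (457 bytes),
// so the field values below illustrate a typical bridge+portmap setup
// using the pod CIDR from the kubeadm options above.
func main() {
	conf := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":      "bridge",
				"bridge":    "bridge",
				"isGateway": true,
				"ipMasq":    true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	b, _ := json.MarshalIndent(conf, "", "  ")
	fmt.Println(string(b))
}
```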
	I0919 17:08:28.004888   91689 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 17:08:28.004989   91689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:08:28.005032   91689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986 minikube.k8s.io/name=scheduled-stop-297595 minikube.k8s.io/updated_at=2023_09_19T17_08_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 17:08:28.444325   91689 ops.go:34] apiserver oom_adj: -16
	I0919 17:08:28.444382   91689 kubeadm.go:1081] duration metric: took 439.457516ms to wait for elevateKubeSystemPrivileges.
	I0919 17:08:28.444396   91689 kubeadm.go:406] StartCluster complete in 12.423434823s
	I0919 17:08:28.444419   91689 settings.go:142] acquiring lock: {Name:mk5b0472b3a6dd507de44affe9807f6a73f90c46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:08:28.444495   91689 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17240-65689/kubeconfig
	I0919 17:08:28.445106   91689 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/kubeconfig: {Name:mkbd16610d1f40f08720849f4f6c1890dee4556c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:08:28.445279   91689 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 17:08:28.445445   91689 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0919 17:08:28.445497   91689 config.go:182] Loaded profile config "scheduled-stop-297595": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 17:08:28.445516   91689 addons.go:69] Setting storage-provisioner=true in profile "scheduled-stop-297595"
	I0919 17:08:28.445536   91689 addons.go:231] Setting addon storage-provisioner=true in "scheduled-stop-297595"
	I0919 17:08:28.445534   91689 addons.go:69] Setting default-storageclass=true in profile "scheduled-stop-297595"
	I0919 17:08:28.445552   91689 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "scheduled-stop-297595"
	I0919 17:08:28.445591   91689 host.go:66] Checking if "scheduled-stop-297595" exists ...
	I0919 17:08:28.446045   91689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:08:28.446057   91689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:08:28.446069   91689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:08:28.446079   91689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:08:28.460773   91689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34313
	I0919 17:08:28.461282   91689 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:08:28.461403   91689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38329
	I0919 17:08:28.461812   91689 main.go:141] libmachine: Using API Version  1
	I0919 17:08:28.461824   91689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:08:28.461870   91689 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:08:28.462167   91689 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:08:28.462325   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetState
	I0919 17:08:28.462323   91689 main.go:141] libmachine: Using API Version  1
	I0919 17:08:28.462344   91689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:08:28.462675   91689 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:08:28.463154   91689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:08:28.463186   91689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:08:28.475661   91689 addons.go:231] Setting addon default-storageclass=true in "scheduled-stop-297595"
	I0919 17:08:28.475695   91689 host.go:66] Checking if "scheduled-stop-297595" exists ...
	I0919 17:08:28.476097   91689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:08:28.476128   91689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:08:28.478559   91689 kapi.go:248] "coredns" deployment in "kube-system" namespace and "scheduled-stop-297595" context rescaled to 1 replicas
	I0919 17:08:28.478588   91689 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.92 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 17:08:28.481129   91689 out.go:177] * Verifying Kubernetes components...
	I0919 17:08:28.478633   91689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42157
	I0919 17:08:28.482491   91689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:08:28.482865   91689 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:08:28.483408   91689 main.go:141] libmachine: Using API Version  1
	I0919 17:08:28.483424   91689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:08:28.483757   91689 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:08:28.483971   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetState
	I0919 17:08:28.485652   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .DriverName
	I0919 17:08:28.487356   91689 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:08:28.488802   91689 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 17:08:28.488814   91689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 17:08:28.488832   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHHostname
	I0919 17:08:28.491762   91689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41045
	I0919 17:08:28.492189   91689 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:08:28.492289   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:28.492672   91689 main.go:141] libmachine: Using API Version  1
	I0919 17:08:28.492689   91689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:08:28.492824   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:95:15", ip: ""} in network mk-scheduled-stop-297595: {Iface:virbr2 ExpiryTime:2023-09-19 18:07:55 +0000 UTC Type:0 Mac:52:54:00:86:95:15 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:scheduled-stop-297595 Clientid:01:52:54:00:86:95:15}
	I0919 17:08:28.492849   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined IP address 192.168.50.92 and MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:28.493048   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHPort
	I0919 17:08:28.493123   91689 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:08:28.493250   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHKeyPath
	I0919 17:08:28.493399   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHUsername
	I0919 17:08:28.493571   91689 sshutil.go:53] new ssh client: &{IP:192.168.50.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/scheduled-stop-297595/id_rsa Username:docker}
	I0919 17:08:28.493694   91689 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:08:28.493724   91689 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:08:28.508377   91689 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41781
	I0919 17:08:28.508760   91689 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:08:28.509227   91689 main.go:141] libmachine: Using API Version  1
	I0919 17:08:28.509247   91689 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:08:28.509575   91689 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:08:28.509762   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetState
	I0919 17:08:28.511553   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .DriverName
	I0919 17:08:28.511787   91689 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 17:08:28.511795   91689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 17:08:28.511808   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHHostname
	I0919 17:08:28.514903   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:28.515340   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:95:15", ip: ""} in network mk-scheduled-stop-297595: {Iface:virbr2 ExpiryTime:2023-09-19 18:07:55 +0000 UTC Type:0 Mac:52:54:00:86:95:15 Iaid: IPaddr:192.168.50.92 Prefix:24 Hostname:scheduled-stop-297595 Clientid:01:52:54:00:86:95:15}
	I0919 17:08:28.515359   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | domain scheduled-stop-297595 has defined IP address 192.168.50.92 and MAC address 52:54:00:86:95:15 in network mk-scheduled-stop-297595
	I0919 17:08:28.515525   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHPort
	I0919 17:08:28.515686   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHKeyPath
	I0919 17:08:28.515856   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .GetSSHUsername
	I0919 17:08:28.515986   91689 sshutil.go:53] new ssh client: &{IP:192.168.50.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/scheduled-stop-297595/id_rsa Username:docker}
	I0919 17:08:28.640503   91689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 17:08:28.663460   91689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 17:08:28.673367   91689 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 17:08:28.673927   91689 api_server.go:52] waiting for apiserver process to appear ...
	I0919 17:08:28.673958   91689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:08:30.206600   91689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.566059702s)
	I0919 17:08:30.206642   91689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.543160019s)
	I0919 17:08:30.206661   91689 main.go:141] libmachine: Making call to close driver server
	I0919 17:08:30.206674   91689 main.go:141] libmachine: Making call to close driver server
	I0919 17:08:30.206676   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .Close
	I0919 17:08:30.206683   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .Close
	I0919 17:08:30.206759   91689 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.533372107s)
	I0919 17:08:30.206777   91689 start.go:917] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0919 17:08:30.206831   91689 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.532857776s)
	I0919 17:08:30.206844   91689 api_server.go:72] duration metric: took 1.728230879s to wait for apiserver process to appear ...
	I0919 17:08:30.206847   91689 api_server.go:88] waiting for apiserver healthz status ...
	I0919 17:08:30.206863   91689 api_server.go:253] Checking apiserver healthz at https://192.168.50.92:8443/healthz ...
	I0919 17:08:30.207116   91689 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:08:30.207120   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | Closing plugin on server side
	I0919 17:08:30.207126   91689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:08:30.207193   91689 main.go:141] libmachine: Making call to close driver server
	I0919 17:08:30.207215   91689 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:08:30.207217   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .Close
	I0919 17:08:30.207226   91689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:08:30.207242   91689 main.go:141] libmachine: Making call to close driver server
	I0919 17:08:30.207251   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .Close
	I0919 17:08:30.207497   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | Closing plugin on server side
	I0919 17:08:30.207516   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | Closing plugin on server side
	I0919 17:08:30.207550   91689 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:08:30.207560   91689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:08:30.207572   91689 main.go:141] libmachine: Making call to close driver server
	I0919 17:08:30.207579   91689 main.go:141] libmachine: (scheduled-stop-297595) Calling .Close
	I0919 17:08:30.207800   91689 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:08:30.207800   91689 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:08:30.207808   91689 main.go:141] libmachine: (scheduled-stop-297595) DBG | Closing plugin on server side
	I0919 17:08:30.207811   91689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:08:30.207829   91689 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:08:30.209545   91689 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0919 17:08:30.210913   91689 addons.go:502] enable addons completed in 1.765475635s: enabled=[storage-provisioner default-storageclass]
	I0919 17:08:30.216081   91689 api_server.go:279] https://192.168.50.92:8443/healthz returned 200:
	ok
	I0919 17:08:30.217196   91689 api_server.go:141] control plane version: v1.28.2
	I0919 17:08:30.217206   91689 api_server.go:131] duration metric: took 10.35428ms to wait for apiserver health ...
	I0919 17:08:30.217212   91689 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 17:08:30.225093   91689 system_pods.go:59] 5 kube-system pods found
	I0919 17:08:30.225106   91689 system_pods.go:61] "etcd-scheduled-stop-297595" [ceac2935-2bfb-48fa-b798-30d605d97118] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 17:08:30.225112   91689 system_pods.go:61] "kube-apiserver-scheduled-stop-297595" [3cb51132-f57a-4aab-b1fa-21238766ec47] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 17:08:30.225118   91689 system_pods.go:61] "kube-controller-manager-scheduled-stop-297595" [33c8fc3c-388c-4f1f-9b5e-2939384596c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 17:08:30.225124   91689 system_pods.go:61] "kube-scheduler-scheduled-stop-297595" [8d781473-9237-4a86-9882-bad92720c391] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 17:08:30.225159   91689 system_pods.go:61] "storage-provisioner" [314389c8-78cb-4c84-a1ba-857d6f0077dd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0919 17:08:30.225163   91689 system_pods.go:74] duration metric: took 7.94753ms to wait for pod list to return data ...
	I0919 17:08:30.225168   91689 kubeadm.go:581] duration metric: took 1.746556857s to wait for : map[apiserver:true system_pods:true] ...
	I0919 17:08:30.225178   91689 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:08:30.228277   91689 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:08:30.228289   91689 node_conditions.go:123] node cpu capacity is 2
	I0919 17:08:30.228296   91689 node_conditions.go:105] duration metric: took 3.115572ms to run NodePressure ...
	I0919 17:08:30.228304   91689 start.go:228] waiting for startup goroutines ...
	I0919 17:08:30.228308   91689 start.go:233] waiting for cluster config update ...
	I0919 17:08:30.228315   91689 start.go:242] writing updated cluster config ...
	I0919 17:08:30.228517   91689 ssh_runner.go:195] Run: rm -f paused
	I0919 17:08:30.277539   91689 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0919 17:08:30.279601   91689 out.go:177] * Done! kubectl is now configured to use "scheduled-stop-297595" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-09-19 17:07:51 UTC, ends at Tue 2023-09-19 17:08:31 UTC. --
	Sep 19 17:08:20 scheduled-stop-297595 dockerd[1141]: time="2023-09-19T17:08:20.857178826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 19 17:08:20 scheduled-stop-297595 dockerd[1141]: time="2023-09-19T17:08:20.857231275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 17:08:20 scheduled-stop-297595 dockerd[1141]: time="2023-09-19T17:08:20.857254465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 19 17:08:20 scheduled-stop-297595 dockerd[1141]: time="2023-09-19T17:08:20.857267536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 17:08:20 scheduled-stop-297595 dockerd[1141]: time="2023-09-19T17:08:20.856672110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 17:08:21 scheduled-stop-297595 cri-dockerd[1020]: time="2023-09-19T17:08:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b46c0d4a5a766600358fc0783a0e975e90962fd8a3f4312a56735c9b5b10b392/resolv.conf as [nameserver 192.168.122.1]"
	Sep 19 17:08:21 scheduled-stop-297595 dockerd[1141]: time="2023-09-19T17:08:21.362214814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 19 17:08:21 scheduled-stop-297595 dockerd[1141]: time="2023-09-19T17:08:21.362674252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 17:08:21 scheduled-stop-297595 dockerd[1141]: time="2023-09-19T17:08:21.364379649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 19 17:08:21 scheduled-stop-297595 dockerd[1141]: time="2023-09-19T17:08:21.364653105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 17:08:21 scheduled-stop-297595 cri-dockerd[1020]: time="2023-09-19T17:08:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ebb3c20b5e046751a6aeca823ad8752fc0254240845f72ef275d1d10a41f7db9/resolv.conf as [nameserver 192.168.122.1]"
	Sep 19 17:08:21 scheduled-stop-297595 dockerd[1141]: time="2023-09-19T17:08:21.493710819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 19 17:08:21 scheduled-stop-297595 dockerd[1141]: time="2023-09-19T17:08:21.495937304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 17:08:21 scheduled-stop-297595 dockerd[1141]: time="2023-09-19T17:08:21.496093170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 19 17:08:21 scheduled-stop-297595 dockerd[1141]: time="2023-09-19T17:08:21.496238119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 17:08:21 scheduled-stop-297595 cri-dockerd[1020]: time="2023-09-19T17:08:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c505122a81341cf1b2e188e16a245f7c66a8b0053ef7f58b78c7edabeb498359/resolv.conf as [nameserver 192.168.122.1]"
	Sep 19 17:08:21 scheduled-stop-297595 dockerd[1141]: time="2023-09-19T17:08:21.813999028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 19 17:08:21 scheduled-stop-297595 dockerd[1141]: time="2023-09-19T17:08:21.814073490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 17:08:21 scheduled-stop-297595 dockerd[1141]: time="2023-09-19T17:08:21.814088387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 19 17:08:21 scheduled-stop-297595 dockerd[1141]: time="2023-09-19T17:08:21.814100289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 17:08:21 scheduled-stop-297595 cri-dockerd[1020]: time="2023-09-19T17:08:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eeb7f7b46352ecabc470256fcd5ddf264ad3f9e75d42c222e43fa3415493218c/resolv.conf as [nameserver 192.168.122.1]"
	Sep 19 17:08:22 scheduled-stop-297595 dockerd[1141]: time="2023-09-19T17:08:22.075159277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 19 17:08:22 scheduled-stop-297595 dockerd[1141]: time="2023-09-19T17:08:22.075540922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 17:08:22 scheduled-stop-297595 dockerd[1141]: time="2023-09-19T17:08:22.075612571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 19 17:08:22 scheduled-stop-297595 dockerd[1141]: time="2023-09-19T17:08:22.075624535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	49480363dfe61       73deb9a3f7025       10 seconds ago      Running             etcd                      0                   eeb7f7b46352e       etcd-scheduled-stop-297595
	f6bbcd5a75898       55f13c92defb1       10 seconds ago      Running             kube-controller-manager   0                   c505122a81341       kube-controller-manager-scheduled-stop-297595
	8b3bed38d84f0       cdcab12b2dd16       10 seconds ago      Running             kube-apiserver            0                   ebb3c20b5e046       kube-apiserver-scheduled-stop-297595
	e574ae8ae13f4       7a5d9d67a13f6       10 seconds ago      Running             kube-scheduler            0                   b46c0d4a5a766       kube-scheduler-scheduled-stop-297595
	
	* 
	* ==> describe nodes <==
	* Name:               scheduled-stop-297595
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=scheduled-stop-297595
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986
	                    minikube.k8s.io/name=scheduled-stop-297595
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_19T17_08_28_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Sep 2023 17:08:24 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  scheduled-stop-297595
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 19 Sep 2023 17:08:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Sep 2023 17:08:28 +0000   Tue, 19 Sep 2023 17:08:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Sep 2023 17:08:28 +0000   Tue, 19 Sep 2023 17:08:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Sep 2023 17:08:28 +0000   Tue, 19 Sep 2023 17:08:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 19 Sep 2023 17:08:28 +0000   Tue, 19 Sep 2023 17:08:23 +0000   KubeletNotReady              [container runtime status check may not have completed yet, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized]
	Addresses:
	  InternalIP:  192.168.50.92
	  Hostname:    scheduled-stop-297595
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 b9ebf9dba1dd4d338385b270da5a8603
	  System UUID:                b9ebf9db-a1dd-4d33-8385-b270da5a8603
	  Boot ID:                    28a3e67d-9001-4aa5-bccb-34b14b7a8f7c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-scheduled-stop-297595                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         3s
	  kube-system                 kube-apiserver-scheduled-stop-297595             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-controller-manager-scheduled-stop-297595    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-scheduled-stop-297595             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (5%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 4s    kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  3s    kubelet  Node scheduled-stop-297595 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s    kubelet  Node scheduled-stop-297595 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s    kubelet  Node scheduled-stop-297595 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3s    kubelet  Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069833] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.301327] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.254729] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +0.142017] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.093398] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep19 17:08] systemd-fstab-generator[552]: Ignoring "noauto" for root device
	[  +0.099323] systemd-fstab-generator[563]: Ignoring "noauto" for root device
	[  +1.009177] systemd-fstab-generator[744]: Ignoring "noauto" for root device
	[  +0.275049] systemd-fstab-generator[782]: Ignoring "noauto" for root device
	[  +0.098881] systemd-fstab-generator[793]: Ignoring "noauto" for root device
	[  +0.114731] systemd-fstab-generator[806]: Ignoring "noauto" for root device
	[  +1.474767] systemd-fstab-generator[965]: Ignoring "noauto" for root device
	[  +0.100708] systemd-fstab-generator[976]: Ignoring "noauto" for root device
	[  +0.111570] systemd-fstab-generator[987]: Ignoring "noauto" for root device
	[  +0.105153] systemd-fstab-generator[998]: Ignoring "noauto" for root device
	[  +0.131571] systemd-fstab-generator[1012]: Ignoring "noauto" for root device
	[  +4.477062] systemd-fstab-generator[1126]: Ignoring "noauto" for root device
	[  +2.943503] kauditd_printk_skb: 53 callbacks suppressed
	[  +4.933811] systemd-fstab-generator[1512]: Ignoring "noauto" for root device
	[  +0.663586] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.598102] systemd-fstab-generator[2416]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [49480363dfe6] <==
	* {"level":"info","ts":"2023-09-19T17:08:22.49534Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5996bca2bfe93abd switched to configuration voters=(6455554523072641725)"}
	{"level":"info","ts":"2023-09-19T17:08:22.495671Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7c852eece197fffe","local-member-id":"5996bca2bfe93abd","added-peer-id":"5996bca2bfe93abd","added-peer-peer-urls":["https://192.168.50.92:2380"]}
	{"level":"info","ts":"2023-09-19T17:08:22.500476Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-19T17:08:22.503176Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"5996bca2bfe93abd","initial-advertise-peer-urls":["https://192.168.50.92:2380"],"listen-peer-urls":["https://192.168.50.92:2380"],"advertise-client-urls":["https://192.168.50.92:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.92:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-19T17:08:22.503421Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-19T17:08:22.502691Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.92:2380"}
	{"level":"info","ts":"2023-09-19T17:08:22.504695Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.92:2380"}
	{"level":"info","ts":"2023-09-19T17:08:22.842258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5996bca2bfe93abd is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-19T17:08:22.842319Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5996bca2bfe93abd became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-19T17:08:22.842346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5996bca2bfe93abd received MsgPreVoteResp from 5996bca2bfe93abd at term 1"}
	{"level":"info","ts":"2023-09-19T17:08:22.842358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5996bca2bfe93abd became candidate at term 2"}
	{"level":"info","ts":"2023-09-19T17:08:22.842363Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5996bca2bfe93abd received MsgVoteResp from 5996bca2bfe93abd at term 2"}
	{"level":"info","ts":"2023-09-19T17:08:22.842404Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5996bca2bfe93abd became leader at term 2"}
	{"level":"info","ts":"2023-09-19T17:08:22.842414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5996bca2bfe93abd elected leader 5996bca2bfe93abd at term 2"}
	{"level":"info","ts":"2023-09-19T17:08:22.846112Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T17:08:22.848847Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"5996bca2bfe93abd","local-member-attributes":"{Name:scheduled-stop-297595 ClientURLs:[https://192.168.50.92:2379]}","request-path":"/0/members/5996bca2bfe93abd/attributes","cluster-id":"7c852eece197fffe","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-19T17:08:22.849306Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T17:08:22.854912Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-19T17:08:22.855215Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-19T17:08:22.855483Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-19T17:08:22.855508Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7c852eece197fffe","local-member-id":"5996bca2bfe93abd","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T17:08:22.856096Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T17:08:22.856402Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-19T17:08:22.856751Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-19T17:08:22.86213Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.92:2379"}
	
	* 
	* ==> kernel <==
	*  17:08:31 up 0 min,  0 users,  load average: 1.00, 0.24, 0.08
	Linux scheduled-stop-297595 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [8b3bed38d84f] <==
	* I0919 17:08:24.866978       1 shared_informer.go:318] Caches are synced for configmaps
	I0919 17:08:24.867115       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0919 17:08:24.867120       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0919 17:08:24.867392       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0919 17:08:24.867408       1 aggregator.go:166] initial CRD sync complete...
	I0919 17:08:24.867412       1 autoregister_controller.go:141] Starting autoregister controller
	I0919 17:08:24.867416       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0919 17:08:24.867423       1 cache.go:39] Caches are synced for autoregister controller
	I0919 17:08:24.868296       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0919 17:08:24.876265       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0919 17:08:24.877362       1 controller.go:624] quota admission added evaluator for: namespaces
	I0919 17:08:25.047041       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 17:08:25.663955       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0919 17:08:25.671933       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0919 17:08:25.671973       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0919 17:08:26.319994       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0919 17:08:26.364942       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0919 17:08:26.491506       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0919 17:08:26.499066       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.50.92]
	I0919 17:08:26.500228       1 controller.go:624] quota admission added evaluator for: endpoints
	I0919 17:08:26.504901       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 17:08:26.740053       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0919 17:08:27.803848       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0919 17:08:27.821409       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0919 17:08:27.845203       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [f6bbcd5a7589] <==
	* I0919 17:08:29.891778       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0919 17:08:29.891791       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0919 17:08:29.891801       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0919 17:08:29.891819       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0919 17:08:29.891837       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0919 17:08:29.891856       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0919 17:08:29.891869       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	W0919 17:08:29.892091       1 shared_informer.go:593] resyncPeriod 15h17m23.628546131s is smaller than resyncCheckPeriod 20h22m1.520793788s and the informer has already started. Changing it to 20h22m1.520793788s
	I0919 17:08:29.892166       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0919 17:08:29.892182       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0919 17:08:29.892196       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0919 17:08:29.892207       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0919 17:08:29.892218       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0919 17:08:29.892231       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0919 17:08:29.892286       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0919 17:08:29.892348       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0919 17:08:29.892411       1 resource_quota_controller.go:295] "Starting resource quota controller"
	I0919 17:08:29.892426       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0919 17:08:29.892442       1 resource_quota_monitor.go:291] "QuotaMonitor running"
	I0919 17:08:30.035867       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0919 17:08:30.036021       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0919 17:08:30.036029       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0919 17:08:30.335445       1 controllermanager.go:642] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0919 17:08:30.335523       1 horizontal.go:200] "Starting HPA controller"
	I0919 17:08:30.335531       1 shared_informer.go:311] Waiting for caches to sync for HPA
	
	* 
	* ==> kube-scheduler [e574ae8ae13f] <==
	* W0919 17:08:24.842477       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 17:08:24.843165       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0919 17:08:24.845710       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 17:08:24.845731       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0919 17:08:25.651187       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0919 17:08:25.651214       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0919 17:08:25.723929       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0919 17:08:25.724092       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0919 17:08:25.729502       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0919 17:08:25.729793       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 17:08:25.749328       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 17:08:25.749360       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0919 17:08:25.810777       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 17:08:25.810845       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0919 17:08:25.837894       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 17:08:25.838002       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0919 17:08:25.853031       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 17:08:25.853088       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0919 17:08:25.866076       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 17:08:25.866102       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0919 17:08:25.974193       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0919 17:08:25.974249       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0919 17:08:26.067044       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 17:08:26.067209       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0919 17:08:27.723777       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-09-19 17:07:51 UTC, ends at Tue 2023-09-19 17:08:31 UTC. --
	Sep 19 17:08:28 scheduled-stop-297595 kubelet[2437]: I0919 17:08:28.412764    2437 topology_manager.go:215] "Topology Admit Handler" podUID="2c76feb4b13e29a66c6b8d7258ffdda0" podNamespace="kube-system" podName="kube-apiserver-scheduled-stop-297595"
	Sep 19 17:08:28 scheduled-stop-297595 kubelet[2437]: I0919 17:08:28.412909    2437 topology_manager.go:215] "Topology Admit Handler" podUID="85902227fd6ad20988b822069aabfd9c" podNamespace="kube-system" podName="kube-controller-manager-scheduled-stop-297595"
	Sep 19 17:08:28 scheduled-stop-297595 kubelet[2437]: I0919 17:08:28.414799    2437 topology_manager.go:215] "Topology Admit Handler" podUID="e4b80732c7e86ced4dc0eb72fe4a9069" podNamespace="kube-system" podName="kube-scheduler-scheduled-stop-297595"
	Sep 19 17:08:28 scheduled-stop-297595 kubelet[2437]: I0919 17:08:28.426045    2437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e4b80732c7e86ced4dc0eb72fe4a9069-kubeconfig\") pod \"kube-scheduler-scheduled-stop-297595\" (UID: \"e4b80732c7e86ced4dc0eb72fe4a9069\") " pod="kube-system/kube-scheduler-scheduled-stop-297595"
	Sep 19 17:08:28 scheduled-stop-297595 kubelet[2437]: I0919 17:08:28.426113    2437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c76feb4b13e29a66c6b8d7258ffdda0-ca-certs\") pod \"kube-apiserver-scheduled-stop-297595\" (UID: \"2c76feb4b13e29a66c6b8d7258ffdda0\") " pod="kube-system/kube-apiserver-scheduled-stop-297595"
	Sep 19 17:08:28 scheduled-stop-297595 kubelet[2437]: I0919 17:08:28.426134    2437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c76feb4b13e29a66c6b8d7258ffdda0-k8s-certs\") pod \"kube-apiserver-scheduled-stop-297595\" (UID: \"2c76feb4b13e29a66c6b8d7258ffdda0\") " pod="kube-system/kube-apiserver-scheduled-stop-297595"
	Sep 19 17:08:28 scheduled-stop-297595 kubelet[2437]: I0919 17:08:28.426154    2437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c76feb4b13e29a66c6b8d7258ffdda0-usr-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-297595\" (UID: \"2c76feb4b13e29a66c6b8d7258ffdda0\") " pod="kube-system/kube-apiserver-scheduled-stop-297595"
	Sep 19 17:08:28 scheduled-stop-297595 kubelet[2437]: I0919 17:08:28.426178    2437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/85902227fd6ad20988b822069aabfd9c-ca-certs\") pod \"kube-controller-manager-scheduled-stop-297595\" (UID: \"85902227fd6ad20988b822069aabfd9c\") " pod="kube-system/kube-controller-manager-scheduled-stop-297595"
	Sep 19 17:08:28 scheduled-stop-297595 kubelet[2437]: I0919 17:08:28.426197    2437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/85902227fd6ad20988b822069aabfd9c-flexvolume-dir\") pod \"kube-controller-manager-scheduled-stop-297595\" (UID: \"85902227fd6ad20988b822069aabfd9c\") " pod="kube-system/kube-controller-manager-scheduled-stop-297595"
	Sep 19 17:08:28 scheduled-stop-297595 kubelet[2437]: I0919 17:08:28.426270    2437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/85902227fd6ad20988b822069aabfd9c-k8s-certs\") pod \"kube-controller-manager-scheduled-stop-297595\" (UID: \"85902227fd6ad20988b822069aabfd9c\") " pod="kube-system/kube-controller-manager-scheduled-stop-297595"
	Sep 19 17:08:28 scheduled-stop-297595 kubelet[2437]: I0919 17:08:28.426293    2437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/85902227fd6ad20988b822069aabfd9c-usr-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-297595\" (UID: \"85902227fd6ad20988b822069aabfd9c\") " pod="kube-system/kube-controller-manager-scheduled-stop-297595"
	Sep 19 17:08:28 scheduled-stop-297595 kubelet[2437]: I0919 17:08:28.426393    2437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/85902227fd6ad20988b822069aabfd9c-kubeconfig\") pod \"kube-controller-manager-scheduled-stop-297595\" (UID: \"85902227fd6ad20988b822069aabfd9c\") " pod="kube-system/kube-controller-manager-scheduled-stop-297595"
	Sep 19 17:08:28 scheduled-stop-297595 kubelet[2437]: I0919 17:08:28.426415    2437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/0ec06ac1120eeb3f823e902176de0603-etcd-certs\") pod \"etcd-scheduled-stop-297595\" (UID: \"0ec06ac1120eeb3f823e902176de0603\") " pod="kube-system/etcd-scheduled-stop-297595"
	Sep 19 17:08:28 scheduled-stop-297595 kubelet[2437]: I0919 17:08:28.426493    2437 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/0ec06ac1120eeb3f823e902176de0603-etcd-data\") pod \"etcd-scheduled-stop-297595\" (UID: \"0ec06ac1120eeb3f823e902176de0603\") " pod="kube-system/etcd-scheduled-stop-297595"
	Sep 19 17:08:28 scheduled-stop-297595 kubelet[2437]: E0919 17:08:28.440239    2437 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-scheduled-stop-297595\" already exists" pod="kube-system/kube-scheduler-scheduled-stop-297595"
	Sep 19 17:08:28 scheduled-stop-297595 kubelet[2437]: E0919 17:08:28.441057    2437 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-scheduled-stop-297595\" already exists" pod="kube-system/kube-controller-manager-scheduled-stop-297595"
	Sep 19 17:08:28 scheduled-stop-297595 kubelet[2437]: I0919 17:08:28.961479    2437 apiserver.go:52] "Watching apiserver"
	Sep 19 17:08:29 scheduled-stop-297595 kubelet[2437]: I0919 17:08:29.013885    2437 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Sep 19 17:08:29 scheduled-stop-297595 kubelet[2437]: E0919 17:08:29.283289    2437 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"etcd-scheduled-stop-297595\" already exists" pod="kube-system/etcd-scheduled-stop-297595"
	Sep 19 17:08:29 scheduled-stop-297595 kubelet[2437]: E0919 17:08:29.288647    2437 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-scheduled-stop-297595\" already exists" pod="kube-system/kube-controller-manager-scheduled-stop-297595"
	Sep 19 17:08:29 scheduled-stop-297595 kubelet[2437]: E0919 17:08:29.304459    2437 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-scheduled-stop-297595\" already exists" pod="kube-system/kube-apiserver-scheduled-stop-297595"
	Sep 19 17:08:29 scheduled-stop-297595 kubelet[2437]: I0919 17:08:29.314544    2437 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-scheduled-stop-297595" podStartSLOduration=1.314486979 podCreationTimestamp="2023-09-19 17:08:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-19 17:08:29.257725046 +0000 UTC m=+1.476696125" watchObservedRunningTime="2023-09-19 17:08:29.314486979 +0000 UTC m=+1.533458052"
	Sep 19 17:08:29 scheduled-stop-297595 kubelet[2437]: I0919 17:08:29.333787    2437 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-scheduled-stop-297595" podStartSLOduration=2.33374874 podCreationTimestamp="2023-09-19 17:08:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-19 17:08:29.315894045 +0000 UTC m=+1.534865123" watchObservedRunningTime="2023-09-19 17:08:29.33374874 +0000 UTC m=+1.552719818"
	Sep 19 17:08:29 scheduled-stop-297595 kubelet[2437]: I0919 17:08:29.364819    2437 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-scheduled-stop-297595" podStartSLOduration=3.364727775 podCreationTimestamp="2023-09-19 17:08:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-19 17:08:29.335670114 +0000 UTC m=+1.554641192" watchObservedRunningTime="2023-09-19 17:08:29.364727775 +0000 UTC m=+1.583698852"
	Sep 19 17:08:29 scheduled-stop-297595 kubelet[2437]: I0919 17:08:29.366126    2437 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-scheduled-stop-297595" podStartSLOduration=1.366048959 podCreationTimestamp="2023-09-19 17:08:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-19 17:08:29.364715046 +0000 UTC m=+1.583686124" watchObservedRunningTime="2023-09-19 17:08:29.366048959 +0000 UTC m=+1.585020038"
	

-- /stdout --
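Reading the dump above: the node reports Ready=False only because the CNI config is uninitialized ("network plugin is not ready: cni config uninitialized"), so the node.kubernetes.io/not-ready:NoSchedule taint is still in place, and that taint is what leaves the storage-provisioner pod Unschedulable in the pod list. A minimal way to confirm that chain by hand, assuming the profile is still up (these commands are illustrative and were not part of the test run):

	# show the taint that blocks scheduling
	kubectl --context scheduled-stop-297595 describe node scheduled-stop-297595 | grep -A1 Taints
	# show the scheduler's Unschedulable event on the pending pod
	kubectl --context scheduled-stop-297595 -n kube-system get events --field-selector involvedObject.name=storage-provisioner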
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p scheduled-stop-297595 -n scheduled-stop-297595
helpers_test.go:261: (dbg) Run:  kubectl --context scheduled-stop-297595 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestScheduledStopUnix]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context scheduled-stop-297595 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context scheduled-stop-297595 describe pod storage-provisioner: exit status 1 (70.468981ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context scheduled-stop-297595 describe pod storage-provisioner: exit status 1
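The NotFound above is most likely a namespace mismatch rather than a vanished pod: the earlier pod listing used -A and found storage-provisioner in kube-system, while the describe call omits -n and therefore looks in the default namespace. A namespaced variant of the same check (illustrative, not run by the harness):

	kubectl --context scheduled-stop-297595 -n kube-system describe pod storage-provisioner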
helpers_test.go:175: Cleaning up "scheduled-stop-297595" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-297595
--- FAIL: TestScheduledStopUnix (53.28s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (2.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-367105 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p old-k8s-version-367105 "sudo crictl images -o json": exit status 1 (240.974881ms)

-- stdout --
	FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-linux-amd64 ssh -p old-k8s-version-367105 \"sudo crictl images -o json\"": exit status 1
start_stop_delete_test.go:304: failed to decode images json invalid character '\x1b' looking for beginning of value. output:
FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
}
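Two separate problems compound in this failure. The decode error's '\x1b' is the ANSI color escape from crictl's FATA banner, so the test parsed an error message rather than image JSON; and the banner itself says this crictl build only speaks CRI v1, which the legacy dockershim socket configured for the v1.16.0 profile does not implement. A sketch of a manual workaround on the node, assuming cri-dockerd is actually running there (it may not be on an old-k8s-version profile), or falling back to the Docker CLI:

	# point crictl at a CRI v1 endpoint instead of dockershim (assumes cri-dockerd is present)
	sudo crictl -r unix:///var/run/cri-dockerd.sock images -o json
	# or list images without going through CRI at all
	sudo docker images --format '{{.Repository}}:{{.Tag}}'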
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-367105 -n old-k8s-version-367105
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-367105 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-367105 logs -n 25: (1.55797275s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kubenet-325204 sudo                                 | kubenet-325204               | jenkins | v1.31.2 | 19 Sep 23 17:25 UTC | 19 Sep 23 17:25 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p kubenet-325204 sudo                                 | kubenet-325204               | jenkins | v1.31.2 | 19 Sep 23 17:25 UTC |                     |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p kubenet-325204 sudo                                 | kubenet-325204               | jenkins | v1.31.2 | 19 Sep 23 17:25 UTC | 19 Sep 23 17:25 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p kubenet-325204 sudo find                            | kubenet-325204               | jenkins | v1.31.2 | 19 Sep 23 17:25 UTC | 19 Sep 23 17:25 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p kubenet-325204 sudo crio                            | kubenet-325204               | jenkins | v1.31.2 | 19 Sep 23 17:25 UTC | 19 Sep 23 17:25 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p kubenet-325204                                      | kubenet-325204               | jenkins | v1.31.2 | 19 Sep 23 17:25 UTC | 19 Sep 23 17:25 UTC |
	| delete  | -p                                                     | disable-driver-mounts-021123 | jenkins | v1.31.2 | 19 Sep 23 17:25 UTC | 19 Sep 23 17:25 UTC |
	|         | disable-driver-mounts-021123                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-210669 | jenkins | v1.31.2 | 19 Sep 23 17:25 UTC | 19 Sep 23 17:27 UTC |
	|         | default-k8s-diff-port-210669                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-008214             | no-preload-008214            | jenkins | v1.31.2 | 19 Sep 23 17:26 UTC | 19 Sep 23 17:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-008214                                   | no-preload-008214            | jenkins | v1.31.2 | 19 Sep 23 17:26 UTC | 19 Sep 23 17:26 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-008214                  | no-preload-008214            | jenkins | v1.31.2 | 19 Sep 23 17:26 UTC | 19 Sep 23 17:26 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-008214                                   | no-preload-008214            | jenkins | v1.31.2 | 19 Sep 23 17:26 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-201087            | embed-certs-201087           | jenkins | v1.31.2 | 19 Sep 23 17:26 UTC | 19 Sep 23 17:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-201087                                  | embed-certs-201087           | jenkins | v1.31.2 | 19 Sep 23 17:26 UTC | 19 Sep 23 17:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-367105        | old-k8s-version-367105       | jenkins | v1.31.2 | 19 Sep 23 17:27 UTC | 19 Sep 23 17:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-367105                              | old-k8s-version-367105       | jenkins | v1.31.2 | 19 Sep 23 17:27 UTC | 19 Sep 23 17:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-201087                 | embed-certs-201087           | jenkins | v1.31.2 | 19 Sep 23 17:27 UTC | 19 Sep 23 17:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-201087                                  | embed-certs-201087           | jenkins | v1.31.2 | 19 Sep 23 17:27 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-367105             | old-k8s-version-367105       | jenkins | v1.31.2 | 19 Sep 23 17:27 UTC | 19 Sep 23 17:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-367105                              | old-k8s-version-367105       | jenkins | v1.31.2 | 19 Sep 23 17:27 UTC | 19 Sep 23 17:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-210669  | default-k8s-diff-port-210669 | jenkins | v1.31.2 | 19 Sep 23 17:27 UTC | 19 Sep 23 17:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-210669 | jenkins | v1.31.2 | 19 Sep 23 17:27 UTC | 19 Sep 23 17:27 UTC |
	|         | default-k8s-diff-port-210669                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-210669       | default-k8s-diff-port-210669 | jenkins | v1.31.2 | 19 Sep 23 17:27 UTC | 19 Sep 23 17:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-210669 | jenkins | v1.31.2 | 19 Sep 23 17:27 UTC |                     |
	|         | default-k8s-diff-port-210669                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| ssh     | -p old-k8s-version-367105 sudo                         | old-k8s-version-367105       | jenkins | v1.31.2 | 19 Sep 23 17:29 UTC |                     |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/19 17:27:38
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 17:27:38.123377  118443 out.go:296] Setting OutFile to fd 1 ...
	I0919 17:27:38.123520  118443 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:27:38.123529  118443 out.go:309] Setting ErrFile to fd 2...
	I0919 17:27:38.123534  118443 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:27:38.123724  118443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-65689/.minikube/bin
	I0919 17:27:38.124248  118443 out.go:303] Setting JSON to false
	I0919 17:27:38.125284  118443 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7571,"bootTime":1695136887,"procs":250,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 17:27:38.125351  118443 start.go:138] virtualization: kvm guest
	I0919 17:27:38.127749  118443 out.go:177] * [default-k8s-diff-port-210669] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0919 17:27:38.129693  118443 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 17:27:38.129666  118443 notify.go:220] Checking for updates...
	I0919 17:27:38.131295  118443 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 17:27:38.132661  118443 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17240-65689/kubeconfig
	I0919 17:27:38.134110  118443 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-65689/.minikube
	I0919 17:27:38.135507  118443 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 17:27:38.137132  118443 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 17:27:38.139187  118443 config.go:182] Loaded profile config "default-k8s-diff-port-210669": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 17:27:38.139804  118443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:27:38.139867  118443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:27:38.157438  118443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36325
	I0919 17:27:38.157925  118443 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:27:38.158544  118443 main.go:141] libmachine: Using API Version  1
	I0919 17:27:38.158576  118443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:27:38.159026  118443 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:27:38.159257  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .DriverName
	I0919 17:27:38.159525  118443 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 17:27:38.159908  118443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:27:38.159955  118443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:27:38.180869  118443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44793
	I0919 17:27:38.181310  118443 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:27:38.181900  118443 main.go:141] libmachine: Using API Version  1
	I0919 17:27:38.181924  118443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:27:38.182284  118443 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:27:38.182522  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .DriverName
	I0919 17:27:38.220690  118443 out.go:177] * Using the kvm2 driver based on existing profile
	I0919 17:27:38.222211  118443 start.go:298] selected driver: kvm2
	I0919 17:27:38.222225  118443 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-210669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-210669 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.204 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:27:38.222345  118443 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 17:27:38.222989  118443 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:27:38.223100  118443 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17240-65689/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 17:27:38.238704  118443 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0919 17:27:38.239177  118443 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 17:27:38.239231  118443 cni.go:84] Creating CNI manager for ""
	I0919 17:27:38.239255  118443 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 17:27:38.239274  118443 start_flags.go:321] config:
	{Name:default-k8s-diff-port-210669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-210669 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.204 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:27:38.239506  118443 iso.go:125] acquiring lock: {Name:mkdf0d42546c83faf1a624ccdb8d9876db7a1a92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:27:38.241556  118443 out.go:177] * Starting control plane node default-k8s-diff-port-210669 in cluster default-k8s-diff-port-210669
	I0919 17:27:38.033712  117954 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I0919 17:27:38.033749  117954 main.go:141] libmachine: (embed-certs-201087) Calling .GetIP
	I0919 17:27:38.036769  117954 main.go:141] libmachine: (embed-certs-201087) DBG | domain embed-certs-201087 has defined MAC address 52:54:00:0a:0c:4a in network mk-embed-certs-201087
	I0919 17:27:38.037196  117954 main.go:141] libmachine: (embed-certs-201087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:0c:4a", ip: ""} in network mk-embed-certs-201087: {Iface:virbr2 ExpiryTime:2023-09-19 18:27:26 +0000 UTC Type:0 Mac:52:54:00:0a:0c:4a Iaid: IPaddr:192.168.50.129 Prefix:24 Hostname:embed-certs-201087 Clientid:01:52:54:00:0a:0c:4a}
	I0919 17:27:38.037229  117954 main.go:141] libmachine: (embed-certs-201087) DBG | domain embed-certs-201087 has defined IP address 192.168.50.129 and MAC address 52:54:00:0a:0c:4a in network mk-embed-certs-201087
	I0919 17:27:38.037457  117954 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0919 17:27:38.042047  117954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 17:27:38.055292  117954 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 17:27:38.055395  117954 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 17:27:38.080608  117954 docker.go:636] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0919 17:27:38.080636  117954 docker.go:566] Images already preloaded, skipping extraction
	I0919 17:27:38.080698  117954 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 17:27:38.113542  117954 docker.go:636] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0919 17:27:38.113564  117954 cache_images.go:84] Images are preloaded, skipping loading
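The two `docker images --format {{.Repository}}:{{.Tag}}` runs above are how minikube decides the preload tarball does not need extracting (docker.go:566, cache_images.go:84): list what the daemon already has and compare against the expected manifest. A minimal stand-alone sketch of that check, assuming a local `docker` CLI is available; the expected-image list here is an illustrative subset, not minikube's real manifest:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // expectedImages is an illustrative subset of the preload manifest,
    // not minikube's real list.
    var expectedImages = []string{
        "registry.k8s.io/kube-apiserver:v1.28.2",
        "registry.k8s.io/etcd:3.5.9-0",
    }

    func main() {
        // List local images as repo:tag, one per line, as the log above does.
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, img := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            have[img] = true
        }
        for _, want := range expectedImages {
            if !have[want] {
                fmt.Println("missing, extraction needed:", want)
                return
            }
        }
        fmt.Println("images already preloaded, skipping extraction")
    }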
	I0919 17:27:38.113611  117954 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 17:27:38.148927  117954 cni.go:84] Creating CNI manager for ""
	I0919 17:27:38.148954  117954 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 17:27:38.148975  117954 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0919 17:27:38.148997  117954 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.129 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-201087 NodeName:embed-certs-201087 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 17:27:38.149182  117954 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "embed-certs-201087"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 17:27:38.149281  117954 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=embed-certs-201087 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:embed-certs-201087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
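The kubeadm config and kubelet unit logged above are rendered from Go templates and then shipped to the guest (the `scp memory -->` lines that follow). A heavily reduced sketch of that rendering step using `text/template`; the template body and field names here are illustrative, covering only a fragment of the sections shown above:

    package main

    import (
        "os"
        "text/template"
    )

    // An illustrative fragment; the real template covers every section
    // of the kubeadm config shown in the log above.
    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.AdvertiseAddress}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(initCfg))
        data := struct {
            AdvertiseAddress string
            NodeName         string
            BindPort         int
        }{"192.168.50.129", "embed-certs-201087", 8443}
        // Render to stdout; minikube renders to a buffer and scp's it over.
        if err := t.Execute(os.Stdout, data); err != nil {
            panic(err)
        }
    }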
	I0919 17:27:38.149349  117954 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0919 17:27:38.168955  117954 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 17:27:38.169015  117954 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 17:27:38.181337  117954 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0919 17:27:38.201977  117954 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 17:27:38.222051  117954 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2107 bytes)
	I0919 17:27:38.247339  117954 ssh_runner.go:195] Run: grep 192.168.50.129	control-plane.minikube.internal$ /etc/hosts
	I0919 17:27:38.251288  117954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
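Both /etc/hosts edits above (`host.minikube.internal` and `control-plane.minikube.internal`) use the same idempotent shell pattern: filter out any existing entry for the name, append a fresh one, and copy the temp file back over /etc/hosts. A local sketch of the same idea in Go; `hosts.txt` is an illustrative path (the real target is /etc/hosts inside the guest, written via sudo over SSH):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost rewrites path so it ends with exactly one "<ip>\t<name>" line,
    // mirroring the { grep -v ...; echo ...; } > /tmp/h.$$; sudo cp ... pattern.
    func upsertHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // grep -v $'\t<name>$': drop any existing entry for this name.
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := upsertHost("hosts.txt", "192.168.50.129", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }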
	I0919 17:27:38.265052  117954 certs.go:56] Setting up /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/embed-certs-201087 for IP: 192.168.50.129
	I0919 17:27:38.265094  117954 certs.go:190] acquiring lock for shared ca certs: {Name:mkf975c4ed215d047afb89379d3c517cec3820b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:27:38.265287  117954 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.key
	I0919 17:27:38.265356  117954 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.key
	I0919 17:27:38.265437  117954 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/embed-certs-201087/client.key
	I0919 17:27:38.265492  117954 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/embed-certs-201087/apiserver.key.6330a28e
	I0919 17:27:38.265531  117954 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/embed-certs-201087/proxy-client.key
	I0919 17:27:38.265655  117954 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/73397.pem (1338 bytes)
	W0919 17:27:38.265690  117954 certs.go:433] ignoring /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/73397_empty.pem, impossibly tiny 0 bytes
	I0919 17:27:38.265700  117954 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 17:27:38.265721  117954 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem (1078 bytes)
	I0919 17:27:38.265745  117954 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem (1123 bytes)
	I0919 17:27:38.265776  117954 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem (1675 bytes)
	I0919 17:27:38.265826  117954 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem (1708 bytes)
	I0919 17:27:38.267171  117954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/embed-certs-201087/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0919 17:27:38.293934  117954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/embed-certs-201087/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 17:27:38.318364  117954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/embed-certs-201087/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 17:27:38.340925  117954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/embed-certs-201087/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 17:27:38.363239  117954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 17:27:38.387887  117954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 17:27:35.919970  118166 main.go:141] libmachine: (old-k8s-version-367105) Waiting to get IP...
	I0919 17:27:35.921102  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:35.921706  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | unable to find current IP address of domain old-k8s-version-367105 in network mk-old-k8s-version-367105
	I0919 17:27:35.921806  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | I0919 17:27:35.921698  118330 retry.go:31] will retry after 284.786581ms: waiting for machine to come up
	I0919 17:27:36.208432  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:36.209092  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | unable to find current IP address of domain old-k8s-version-367105 in network mk-old-k8s-version-367105
	I0919 17:27:36.209127  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | I0919 17:27:36.209035  118330 retry.go:31] will retry after 344.098897ms: waiting for machine to come up
	I0919 17:27:36.554679  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:36.555509  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | unable to find current IP address of domain old-k8s-version-367105 in network mk-old-k8s-version-367105
	I0919 17:27:36.555534  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | I0919 17:27:36.555454  118330 retry.go:31] will retry after 336.118647ms: waiting for machine to come up
	I0919 17:27:36.893226  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:36.894005  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | unable to find current IP address of domain old-k8s-version-367105 in network mk-old-k8s-version-367105
	I0919 17:27:36.894032  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | I0919 17:27:36.893951  118330 retry.go:31] will retry after 396.562497ms: waiting for machine to come up
	I0919 17:27:37.292553  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:37.293095  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | unable to find current IP address of domain old-k8s-version-367105 in network mk-old-k8s-version-367105
	I0919 17:27:37.293125  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | I0919 17:27:37.293043  118330 retry.go:31] will retry after 493.94906ms: waiting for machine to come up
	I0919 17:27:37.788905  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:37.789694  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | unable to find current IP address of domain old-k8s-version-367105 in network mk-old-k8s-version-367105
	I0919 17:27:37.789725  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | I0919 17:27:37.789608  118330 retry.go:31] will retry after 899.085614ms: waiting for machine to come up
	I0919 17:27:38.691016  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:38.691658  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | unable to find current IP address of domain old-k8s-version-367105 in network mk-old-k8s-version-367105
	I0919 17:27:38.691685  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | I0919 17:27:38.691597  118330 retry.go:31] will retry after 1.010411957s: waiting for machine to come up
	I0919 17:27:39.703018  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:39.703599  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | unable to find current IP address of domain old-k8s-version-367105 in network mk-old-k8s-version-367105
	I0919 17:27:39.703627  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | I0919 17:27:39.703547  118330 retry.go:31] will retry after 1.293167436s: waiting for machine to come up
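The `retry.go:31] will retry after ...` lines above show minikube's backoff while waiting for the VM's DHCP lease: delays grow roughly geometrically with jitter (285ms, 344ms, ..., 1.3s and climbing). A minimal sketch of that retry shape; the constants and function names are illustrative, not minikube's actual helper:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry keeps calling f with a jittered, roughly doubling delay,
    // the shape visible in the log timings above.
    func retry(f func() error, maxWait time.Duration) error {
        deadline := time.Now().Add(maxWait)
        delay := 250 * time.Millisecond
        for {
            err := f()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out: %w", err)
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
            delay *= 2
        }
    }

    func main() {
        attempts := 0
        _ = retry(func() error {
            attempts++
            if attempts < 4 {
                return errors.New("waiting for machine to come up")
            }
            return nil
        }, time.Minute)
    }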
	I0919 17:27:36.842320  117576 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0919 17:27:36.851096  117576 kubeadm.go:787] kubelet initialised
	I0919 17:27:36.851180  117576 kubeadm.go:788] duration metric: took 8.828213ms waiting for restarted kubelet to initialise ...
	I0919 17:27:36.851206  117576 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:27:36.861161  117576 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-dwhhg" in "kube-system" namespace to be "Ready" ...
	I0919 17:27:36.871942  117576 pod_ready.go:97] node "no-preload-008214" hosting pod "coredns-5dd5756b68-dwhhg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-008214" has status "Ready":"False"
	I0919 17:27:36.871984  117576 pod_ready.go:81] duration metric: took 10.741919ms waiting for pod "coredns-5dd5756b68-dwhhg" in "kube-system" namespace to be "Ready" ...
	E0919 17:27:36.871997  117576 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-008214" hosting pod "coredns-5dd5756b68-dwhhg" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-008214" has status "Ready":"False"
	I0919 17:27:36.872008  117576 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-008214" in "kube-system" namespace to be "Ready" ...
	I0919 17:27:36.880961  117576 pod_ready.go:97] node "no-preload-008214" hosting pod "etcd-no-preload-008214" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-008214" has status "Ready":"False"
	I0919 17:27:36.880992  117576 pod_ready.go:81] duration metric: took 8.974021ms waiting for pod "etcd-no-preload-008214" in "kube-system" namespace to be "Ready" ...
	E0919 17:27:36.881003  117576 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-008214" hosting pod "etcd-no-preload-008214" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-008214" has status "Ready":"False"
	I0919 17:27:36.881015  117576 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-008214" in "kube-system" namespace to be "Ready" ...
	I0919 17:27:36.890871  117576 pod_ready.go:97] node "no-preload-008214" hosting pod "kube-apiserver-no-preload-008214" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-008214" has status "Ready":"False"
	I0919 17:27:36.890902  117576 pod_ready.go:81] duration metric: took 9.875033ms waiting for pod "kube-apiserver-no-preload-008214" in "kube-system" namespace to be "Ready" ...
	E0919 17:27:36.890915  117576 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-008214" hosting pod "kube-apiserver-no-preload-008214" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-008214" has status "Ready":"False"
	I0919 17:27:36.890925  117576 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-008214" in "kube-system" namespace to be "Ready" ...
	I0919 17:27:36.899764  117576 pod_ready.go:97] node "no-preload-008214" hosting pod "kube-controller-manager-no-preload-008214" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-008214" has status "Ready":"False"
	I0919 17:27:36.899839  117576 pod_ready.go:81] duration metric: took 8.901731ms waiting for pod "kube-controller-manager-no-preload-008214" in "kube-system" namespace to be "Ready" ...
	E0919 17:27:36.899865  117576 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-008214" hosting pod "kube-controller-manager-no-preload-008214" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-008214" has status "Ready":"False"
	I0919 17:27:36.899884  117576 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-m954q" in "kube-system" namespace to be "Ready" ...
	I0919 17:27:37.247795  117576 pod_ready.go:92] pod "kube-proxy-m954q" in "kube-system" namespace has status "Ready":"True"
	I0919 17:27:37.247819  117576 pod_ready.go:81] duration metric: took 347.91764ms waiting for pod "kube-proxy-m954q" in "kube-system" namespace to be "Ready" ...
	I0919 17:27:37.247829  117576 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-008214" in "kube-system" namespace to be "Ready" ...
	I0919 17:27:39.561864  117576 pod_ready.go:102] pod "kube-scheduler-no-preload-008214" in "kube-system" namespace has status "Ready":"False"
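The `pod_ready.go` lines above poll each system-critical pod's `Ready` condition, skipping pods whose node is itself not yet `Ready`. A minimal sketch of the underlying condition check, assuming `k8s.io/client-go` is on the module path; the kubeconfig path and pod name are placeholders:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True,
    // the same test behind the has status "Ready":"True" lines above.
    func isPodReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        for ctx.Err() == nil {
            pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-scheduler-no-preload-008214", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }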
	I0919 17:27:38.243001  118443 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 17:27:38.243034  118443 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17240-65689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I0919 17:27:38.243052  118443 cache.go:57] Caching tarball of preloaded images
	I0919 17:27:38.243164  118443 preload.go:174] Found /home/jenkins/minikube-integration/17240-65689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0919 17:27:38.243176  118443 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0919 17:27:38.243273  118443 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/default-k8s-diff-port-210669/config.json ...
	I0919 17:27:38.243453  118443 start.go:365] acquiring machines lock for default-k8s-diff-port-210669: {Name:mk203c3120e1410acfaa868a5fe996910aac1894 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 17:27:38.415099  117954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 17:27:38.438391  117954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 17:27:38.461083  117954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 17:27:38.485097  117954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/certs/73397.pem --> /usr/share/ca-certificates/73397.pem (1338 bytes)
	I0919 17:27:38.508978  117954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem --> /usr/share/ca-certificates/733972.pem (1708 bytes)
	I0919 17:27:38.533086  117954 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 17:27:38.551159  117954 ssh_runner.go:195] Run: openssl version
	I0919 17:27:38.557302  117954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 17:27:38.568088  117954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:27:38.573234  117954 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 19 16:35 /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:27:38.573284  117954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:27:38.579035  117954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 17:27:38.589789  117954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73397.pem && ln -fs /usr/share/ca-certificates/73397.pem /etc/ssl/certs/73397.pem"
	I0919 17:27:38.601089  117954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73397.pem
	I0919 17:27:38.607338  117954 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 19 16:39 /usr/share/ca-certificates/73397.pem
	I0919 17:27:38.607405  117954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73397.pem
	I0919 17:27:38.615083  117954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/73397.pem /etc/ssl/certs/51391683.0"
	I0919 17:27:38.627184  117954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/733972.pem && ln -fs /usr/share/ca-certificates/733972.pem /etc/ssl/certs/733972.pem"
	I0919 17:27:38.641077  117954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/733972.pem
	I0919 17:27:38.647249  117954 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 19 16:39 /usr/share/ca-certificates/733972.pem
	I0919 17:27:38.647310  117954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/733972.pem
	I0919 17:27:38.654138  117954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/733972.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 17:27:38.667881  117954 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0919 17:27:38.674110  117954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 17:27:38.682400  117954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 17:27:38.690273  117954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 17:27:38.698221  117954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 17:27:38.706342  117954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 17:27:38.714294  117954 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
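Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks one question: does the certificate expire within the next 86400 seconds (24 hours)? The equivalent test in Go's standard library, with an illustrative cert path:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d, the question `openssl x509 -checkend <seconds>` answers.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        // Illustrative path; the log checks several certs under /var/lib/minikube/certs.
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }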
	I0919 17:27:38.721924  117954 kubeadm.go:404] StartCluster: {Name:embed-certs-201087 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-201087 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.129 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:27:38.722055  117954 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 17:27:38.748856  117954 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 17:27:38.762770  117954 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0919 17:27:38.762796  117954 kubeadm.go:636] restartCluster start
	I0919 17:27:38.762854  117954 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 17:27:38.775458  117954 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:27:38.776191  117954 kubeconfig.go:135] verify returned: extract IP: "embed-certs-201087" does not appear in /home/jenkins/minikube-integration/17240-65689/kubeconfig
	I0919 17:27:38.776456  117954 kubeconfig.go:146] "embed-certs-201087" context is missing from /home/jenkins/minikube-integration/17240-65689/kubeconfig - will repair!
	I0919 17:27:38.776926  117954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/kubeconfig: {Name:mkbd16610d1f40f08720849f4f6c1890dee4556c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:27:38.778598  117954 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 17:27:38.791106  117954 api_server.go:166] Checking apiserver status ...
	I0919 17:27:38.791197  117954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:27:38.804973  117954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:27:38.805005  117954 api_server.go:166] Checking apiserver status ...
	I0919 17:27:38.805058  117954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:27:38.818212  117954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:27:39.318932  117954 api_server.go:166] Checking apiserver status ...
	I0919 17:27:39.319056  117954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:27:39.335780  117954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:27:39.819366  117954 api_server.go:166] Checking apiserver status ...
	I0919 17:27:39.819472  117954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:27:39.835572  117954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:27:40.319090  117954 api_server.go:166] Checking apiserver status ...
	I0919 17:27:40.319182  117954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:27:40.332116  117954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:27:40.818376  117954 api_server.go:166] Checking apiserver status ...
	I0919 17:27:40.818475  117954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:27:40.831490  117954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:27:41.318812  117954 api_server.go:166] Checking apiserver status ...
	I0919 17:27:41.318913  117954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:27:41.332216  117954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:27:41.818823  117954 api_server.go:166] Checking apiserver status ...
	I0919 17:27:41.818927  117954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:27:41.835907  117954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:27:42.318398  117954 api_server.go:166] Checking apiserver status ...
	I0919 17:27:42.318501  117954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:27:42.331236  117954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:27:42.818560  117954 api_server.go:166] Checking apiserver status ...
	I0919 17:27:42.818650  117954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:27:42.832775  117954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:27:43.319262  117954 api_server.go:166] Checking apiserver status ...
	I0919 17:27:43.319374  117954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:27:43.336610  117954 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
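The repeated `Checking apiserver status ...` / `stopped: unable to get apiserver pid` pairs above are a roughly half-second polling loop around `sudo pgrep -xnf kube-apiserver.*minikube.*`, which exits 1 until the restarted apiserver process appears. A local sketch of that loop, without the SSH layer and with an illustrative timeout:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // apiserverPID returns the newest matching pid, or "" while the process
    // is down (pgrep exits 1, which the log above shows repeatedly).
    func apiserverPID() string {
        out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            return ""
        }
        return strings.TrimSpace(string(out))
    }

    func main() {
        deadline := time.Now().Add(30 * time.Second) // illustrative timeout
        for time.Now().Before(deadline) {
            if pid := apiserverPID(); pid != "" {
                fmt.Println("apiserver pid:", pid)
                return
            }
            fmt.Println("stopped: unable to get apiserver pid, retrying")
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver")
    }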
	I0919 17:27:40.997904  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:40.998508  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | unable to find current IP address of domain old-k8s-version-367105 in network mk-old-k8s-version-367105
	I0919 17:27:40.998538  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | I0919 17:27:40.998463  118330 retry.go:31] will retry after 1.12458237s: waiting for machine to come up
	I0919 17:27:42.125246  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:42.125774  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | unable to find current IP address of domain old-k8s-version-367105 in network mk-old-k8s-version-367105
	I0919 17:27:42.125808  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | I0919 17:27:42.125750  118330 retry.go:31] will retry after 2.178017706s: waiting for machine to come up
	I0919 17:27:44.306285  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:44.306776  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | unable to find current IP address of domain old-k8s-version-367105 in network mk-old-k8s-version-367105
	I0919 17:27:44.306810  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | I0919 17:27:44.306716  118330 retry.go:31] will retry after 2.039781215s: waiting for machine to come up
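The retry.go lines above wait for the restarted VM to obtain a DHCP lease, retrying with an irregular, growing delay (1.12s, 2.18s, 2.04s, ...). A sketch of that jittered-backoff retry shape; retryWithBackoff is a hypothetical helper, not minikube's retry package:

```go
// Hypothetical sketch: retry an operation with a jittered, growing delay
// until it succeeds or the time budget is spent.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(budget time.Duration, op func() error) error {
	start := time.Now()
	delay := time.Second
	for {
		if err := op(); err == nil {
			return nil
		} else if time.Since(start) > budget {
			return fmt.Errorf("gave up after %s: %w", budget, err)
		}
		// Randomize around the base delay, then grow it; this yields the
		// irregular 1.1s, 2.2s, 2.0s, 2.5s, ... waits seen in the log.
		wait := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
}

func main() {
	attempt := 0
	_ = retryWithBackoff(time.Minute, func() error {
		if attempt++; attempt < 4 {
			return errors.New("unable to find current IP address")
		}
		return nil
	})
}
```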
	I0919 17:27:42.056177  117576 pod_ready.go:102] pod "kube-scheduler-no-preload-008214" in "kube-system" namespace has status "Ready":"False"
	I0919 17:27:43.558551  117576 pod_ready.go:92] pod "kube-scheduler-no-preload-008214" in "kube-system" namespace has status "Ready":"True"
	I0919 17:27:43.558582  117576 pod_ready.go:81] duration metric: took 6.310745897s waiting for pod "kube-scheduler-no-preload-008214" in "kube-system" namespace to be "Ready" ...
	I0919 17:27:43.558595  117576 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace to be "Ready" ...
	I0919 17:27:45.578712  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
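The pod_ready lines poll a pod's Ready condition until it flips to True or a 4m0s budget runs out. A hedged client-go sketch of the same check (helper names are hypothetical; requires the k8s.io/client-go module and a kubeconfig at the default location):

```go
// Hypothetical sketch: poll a pod's Ready condition the way the pod_ready
// lines above do, re-checking every couple of seconds.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// The log gives each pod a 4m0s budget.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for ctx.Err() == nil {
		if ready, err := isPodReady(ctx, cs, "kube-system", "metrics-server-57f55c9bc5-vxx4h"); err == nil && ready {
			fmt.Println(`status "Ready":"True"`)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for pod to become Ready")
}
```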
	[... process 117954's "Checking apiserver status ..." / "stopped: unable to get apiserver pid" retries continue every ~500ms from 17:27:43.819 through 17:27:48.332, each with empty stdout/stderr ...]
	I0919 17:27:46.349286  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:46.349783  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | unable to find current IP address of domain old-k8s-version-367105 in network mk-old-k8s-version-367105
	I0919 17:27:46.349813  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | I0919 17:27:46.349724  118330 retry.go:31] will retry after 2.451479743s: waiting for machine to come up
	I0919 17:27:48.804233  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:48.804816  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | unable to find current IP address of domain old-k8s-version-367105 in network mk-old-k8s-version-367105
	I0919 17:27:48.804850  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | I0919 17:27:48.804743  118330 retry.go:31] will retry after 3.236138896s: waiting for machine to come up
	I0919 17:27:47.579797  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:27:49.582470  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:27:48.791698  117954 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0919 17:27:48.791736  117954 kubeadm.go:1128] stopping kube-system containers ...
	I0919 17:27:48.791821  117954 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 17:27:48.813823  117954 docker.go:462] Stopping containers: [06aa6e81c067 39cdd3e07246 7098394bd429 57909d699934 81aa5dac1bc0 940074cc504d 8c38387f7fa2 71f51df34905 e6961a3b9f3c a669819e8474 a53ab4039a89 2b3d2a9ff474 8926108810c4 a38f3f89e6a9 bddec32258a8]
	I0919 17:27:48.813911  117954 ssh_runner.go:195] Run: docker stop 06aa6e81c067 39cdd3e07246 7098394bd429 57909d699934 81aa5dac1bc0 940074cc504d 8c38387f7fa2 71f51df34905 e6961a3b9f3c a669819e8474 a53ab4039a89 2b3d2a9ff474 8926108810c4 a38f3f89e6a9 bddec32258a8
	I0919 17:27:48.837109  117954 ssh_runner.go:195] Run: sudo systemctl stop kubelet
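Having decided the cluster needs reconfiguring, restartCluster first quiesces the node: it lists every container whose name matches the kubelet's k8s_&lt;container&gt;_&lt;pod&gt;_&lt;namespace&gt;_ pattern in kube-system, stops them with one `docker stop`, then stops the kubelet. A sketch of those two docker invocations (hypothetical wrapper function):

```go
// Hypothetical sketch of the two docker commands in the log above:
// list kube-system container IDs, then stop them all in a single call.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func stopKubeSystemContainers() error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return nil // nothing to stop
	}
	args := append([]string{"stop"}, ids...)
	return exec.Command("docker", args...).Run()
}

func main() { fmt.Println(stopKubeSystemContainers()) }
```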
	I0919 17:27:48.854727  117954 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 17:27:48.866143  117954 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 17:27:48.866212  117954 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 17:27:48.876719  117954 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
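Because none of the four kubeconfig files exists (the `ls -la` probe exits with status 2), the stale-config cleanup is skipped and the staged kubeadm.yaml.new is promoted. A sketch of that existence check, assuming the same four paths:

```go
// Hypothetical sketch: if any kubeconfig kubeadm normally writes is
// missing, skip stale-config cleanup and reconfigure from scratch.
package main

import (
	"fmt"
	"os"
)

var kubeconfigs = []string{
	"/etc/kubernetes/admin.conf",
	"/etc/kubernetes/kubelet.conf",
	"/etc/kubernetes/controller-manager.conf",
	"/etc/kubernetes/scheduler.conf",
}

func needsFreshConfig() bool {
	for _, f := range kubeconfigs {
		if _, err := os.Stat(f); err != nil {
			return true // at least one file missing, as in the log
		}
	}
	return false
}

func main() { fmt.Println(needsFreshConfig()) }
```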
	I0919 17:27:48.876744  117954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:27:49.008600  117954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:27:49.900795  117954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:27:50.099276  117954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:27:50.205506  117954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
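The restart then replays individual kubeadm init phases rather than running a full `kubeadm init`: certs, kubeconfig, kubelet-start, control-plane, and local etcd, each against the same config file (the addon phase follows later, after the control plane is healthy). A sketch of that sequence as plain exec calls, omitting the `sudo env PATH=...` wrapper the log uses:

```go
// Hypothetical sketch of the kubeadm phase sequence logged above: each
// step is `kubeadm init phase <phase> --config <file>`, run in order,
// and any failure aborts the restart.
package main

import (
	"fmt"
	"os/exec"
)

func runKubeadmPhases(config string) error {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append(append([]string{"init", "phase"}, p...), "--config", config)
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %v failed: %v\n%s", p, err, out)
		}
	}
	return nil
}

func main() { fmt.Println(runKubeadmPhases("/var/tmp/minikube/kubeadm.yaml")) }
```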
	I0919 17:27:50.309676  117954 api_server.go:52] waiting for apiserver process to appear ...
	I0919 17:27:50.309761  117954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:27:50.326878  117954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:27:50.837833  117954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:27:51.337431  117954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:27:51.837225  117954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:27:52.338095  117954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:27:52.445272  117954 api_server.go:72] duration metric: took 2.135597888s to wait for apiserver process to appear ...
	I0919 17:27:52.445303  117954 api_server.go:88] waiting for apiserver healthz status ...
	I0919 17:27:52.445324  117954 api_server.go:253] Checking apiserver healthz at https://192.168.50.129:8443/healthz ...
	I0919 17:27:52.446285  117954 api_server.go:269] stopped: https://192.168.50.129:8443/healthz: Get "https://192.168.50.129:8443/healthz": dial tcp 192.168.50.129:8443: connect: connection refused
	I0919 17:27:52.446330  117954 api_server.go:253] Checking apiserver healthz at https://192.168.50.129:8443/healthz ...
	I0919 17:27:52.446822  117954 api_server.go:269] stopped: https://192.168.50.129:8443/healthz: Get "https://192.168.50.129:8443/healthz": dial tcp 192.168.50.129:8443: connect: connection refused
	I0919 17:27:52.947550  117954 api_server.go:253] Checking apiserver healthz at https://192.168.50.129:8443/healthz ...
	I0919 17:27:52.042790  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:52.043484  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | unable to find current IP address of domain old-k8s-version-367105 in network mk-old-k8s-version-367105
	I0919 17:27:52.043518  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | I0919 17:27:52.043410  118330 retry.go:31] will retry after 5.514282668s: waiting for machine to come up
	I0919 17:27:52.079034  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:27:54.080729  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:27:56.082604  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:27:55.839413  117954 api_server.go:279] https://192.168.50.129:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0919 17:27:55.839447  117954 api_server.go:103] status: https://192.168.50.129:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0919 17:27:55.839459  117954 api_server.go:253] Checking apiserver healthz at https://192.168.50.129:8443/healthz ...
	I0919 17:27:55.922604  117954 api_server.go:279] https://192.168.50.129:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0919 17:27:55.922642  117954 api_server.go:103] status: https://192.168.50.129:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0919 17:27:55.947904  117954 api_server.go:253] Checking apiserver healthz at https://192.168.50.129:8443/healthz ...
	I0919 17:27:55.976745  117954 api_server.go:279] https://192.168.50.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0919 17:27:55.976777  117954 api_server.go:103] status: https://192.168.50.129:8443/healthz returned error 500:
	[... body identical to the 500 response above ...]
	I0919 17:27:56.447362  117954 api_server.go:253] Checking apiserver healthz at https://192.168.50.129:8443/healthz ...
	I0919 17:27:56.454470  117954 api_server.go:279] https://192.168.50.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0919 17:27:56.454500  117954 api_server.go:103] status: https://192.168.50.129:8443/healthz returned error 500:
	[... body identical to the 500 response above ...]
	I0919 17:27:56.947062  117954 api_server.go:253] Checking apiserver healthz at https://192.168.50.129:8443/healthz ...
	I0919 17:27:56.954403  117954 api_server.go:279] https://192.168.50.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0919 17:27:56.954435  117954 api_server.go:103] status: https://192.168.50.129:8443/healthz returned error 500:
	[... body identical to the 500 response above ...]
	I0919 17:27:57.446967  117954 api_server.go:253] Checking apiserver healthz at https://192.168.50.129:8443/healthz ...
	I0919 17:27:57.452862  117954 api_server.go:279] https://192.168.50.129:8443/healthz returned 200:
	ok
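The healthz probe above walks through the apiserver's usual bootstrap states: connection refused while the process is still binding, 403 while anonymous access to /healthz is forbidden (RBAC bootstrap pending), 500 while individual poststarthooks still report failed, and finally 200. A sketch of a poller that treats everything before 200 as retryable (hypothetical helper; it skips TLS verification because the bootstrap certificate is self-signed):

```go
// Hypothetical sketch: poll /healthz until it returns 200, logging the
// intermediate 403/500 bodies the way the log above does.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves a self-signed cert during bootstrap.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "returned 200: ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz never became ready within %s", timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.50.129:8443/healthz", time.Minute))
}
```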
	I0919 17:27:57.460600  117954 api_server.go:141] control plane version: v1.28.2
	I0919 17:27:57.460623  117954 api_server.go:131] duration metric: took 5.01531237s to wait for apiserver health ...
	I0919 17:27:57.460632  117954 cni.go:84] Creating CNI manager for ""
	I0919 17:27:57.460648  117954 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 17:27:57.462728  117954 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 17:27:57.464255  117954 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 17:27:57.477236  117954 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
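The "scp memory" line means the CNI config is generated in memory and written straight to /etc/cni/net.d/1-k8s.conflist on the guest. The sketch below writes a generic bridge+portmap conflist for illustration only; it is not the exact 457-byte file minikube generates:

```go
// Hypothetical sketch: write a bridge CNI conflist to /etc/cni/net.d,
// mirroring the "Configuring bridge CNI" step above. The JSON is a
// generic example, not minikube's generated config.
package main

import (
	"fmt"
	"os"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func writeBridgeConflist() error {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		return err
	}
	return os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644)
}

func main() { fmt.Println(writeBridgeConflist()) }
```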
	I0919 17:27:57.506946  117954 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 17:27:57.517660  117954 system_pods.go:59] 8 kube-system pods found
	I0919 17:27:57.517693  117954 system_pods.go:61] "coredns-5dd5756b68-v6ghh" [aae4424e-ee7d-4434-9ab3-6f813409048f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:27:57.517706  117954 system_pods.go:61] "etcd-embed-certs-201087" [315d3f24-6d0e-49fb-80c9-f39cdd78f739] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 17:27:57.517718  117954 system_pods.go:61] "kube-apiserver-embed-certs-201087" [4a0b3673-ca47-4d9b-9ce4-66808cb4ca8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 17:27:57.517731  117954 system_pods.go:61] "kube-controller-manager-embed-certs-201087" [df824a5a-5342-46cb-afc0-bfc62c6e1e52] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 17:27:57.517742  117954 system_pods.go:61] "kube-proxy-7skcp" [954b6ba4-e52b-4532-ab7e-76063ad14efd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 17:27:57.517749  117954 system_pods.go:61] "kube-scheduler-embed-certs-201087" [13b42ebc-d329-49f1-829a-7cf2b9e75c3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 17:27:57.517757  117954 system_pods.go:61] "metrics-server-57f55c9bc5-rnjvj" [bedb9b71-7974-48ea-93cb-c3dad12ab821] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:27:57.517765  117954 system_pods.go:61] "storage-provisioner" [a43eef08-c248-44fd-9f1d-acc80da15f82] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 17:27:57.517775  117954 system_pods.go:74] duration metric: took 10.804162ms to wait for pod list to return data ...
	I0919 17:27:57.517790  117954 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:27:57.521469  117954 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:27:57.521499  117954 node_conditions.go:123] node cpu capacity is 2
	I0919 17:27:57.521513  117954 node_conditions.go:105] duration metric: took 3.713086ms to run NodePressure ...
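The NodePressure check reads node capacity from the API; the log shows 17784752Ki of ephemeral storage and 2 CPUs. A client-go sketch of the same read (assumes the client-go module and a default kubeconfig):

```go
// Hypothetical sketch: list nodes and print the two capacity figures the
// NodePressure check above logs (ephemeral-storage and cpu).
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
			n.Name, n.Status.Capacity.StorageEphemeral(), n.Status.Capacity.Cpu())
	}
}
```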
	I0919 17:27:57.521534  117954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:27:58.014741  117954 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0919 17:27:58.025736  117954 kubeadm.go:787] kubelet initialised
	I0919 17:27:58.025763  117954 kubeadm.go:788] duration metric: took 10.996482ms waiting for restarted kubelet to initialise ...
	I0919 17:27:58.025774  117954 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:27:58.037990  117954 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-v6ghh" in "kube-system" namespace to be "Ready" ...
	I0919 17:27:58.046878  117954 pod_ready.go:97] node "embed-certs-201087" hosting pod "coredns-5dd5756b68-v6ghh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-201087" has status "Ready":"False"
	I0919 17:27:58.046906  117954 pod_ready.go:81] duration metric: took 8.887717ms waiting for pod "coredns-5dd5756b68-v6ghh" in "kube-system" namespace to be "Ready" ...
	E0919 17:27:58.046918  117954 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-201087" hosting pod "coredns-5dd5756b68-v6ghh" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-201087" has status "Ready":"False"
	I0919 17:27:58.046939  117954 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-201087" in "kube-system" namespace to be "Ready" ...
	I0919 17:27:58.062419  117954 pod_ready.go:97] node "embed-certs-201087" hosting pod "etcd-embed-certs-201087" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-201087" has status "Ready":"False"
	I0919 17:27:58.062453  117954 pod_ready.go:81] duration metric: took 15.499975ms waiting for pod "etcd-embed-certs-201087" in "kube-system" namespace to be "Ready" ...
	E0919 17:27:58.062465  117954 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-201087" hosting pod "etcd-embed-certs-201087" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-201087" has status "Ready":"False"
	I0919 17:27:58.062474  117954 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-201087" in "kube-system" namespace to be "Ready" ...
	I0919 17:27:58.073561  117954 pod_ready.go:97] node "embed-certs-201087" hosting pod "kube-apiserver-embed-certs-201087" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-201087" has status "Ready":"False"
	I0919 17:27:58.073595  117954 pod_ready.go:81] duration metric: took 11.111725ms waiting for pod "kube-apiserver-embed-certs-201087" in "kube-system" namespace to be "Ready" ...
	E0919 17:27:58.073609  117954 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-201087" hosting pod "kube-apiserver-embed-certs-201087" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-201087" has status "Ready":"False"
	I0919 17:27:58.073635  117954 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-201087" in "kube-system" namespace to be "Ready" ...
	I0919 17:27:58.087934  117954 pod_ready.go:97] node "embed-certs-201087" hosting pod "kube-controller-manager-embed-certs-201087" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-201087" has status "Ready":"False"
	I0919 17:27:58.087967  117954 pod_ready.go:81] duration metric: took 14.32107ms waiting for pod "kube-controller-manager-embed-certs-201087" in "kube-system" namespace to be "Ready" ...
	E0919 17:27:58.087980  117954 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-201087" hosting pod "kube-controller-manager-embed-certs-201087" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-201087" has status "Ready":"False"
	I0919 17:27:58.087989  117954 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7skcp" in "kube-system" namespace to be "Ready" ...
	I0919 17:27:58.420784  117954 pod_ready.go:97] node "embed-certs-201087" hosting pod "kube-proxy-7skcp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-201087" has status "Ready":"False"
	I0919 17:27:58.420811  117954 pod_ready.go:81] duration metric: took 332.813758ms waiting for pod "kube-proxy-7skcp" in "kube-system" namespace to be "Ready" ...
	E0919 17:27:58.420824  117954 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-201087" hosting pod "kube-proxy-7skcp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-201087" has status "Ready":"False"
	I0919 17:27:58.420833  117954 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-201087" in "kube-system" namespace to be "Ready" ...
	I0919 17:27:58.819690  117954 pod_ready.go:97] node "embed-certs-201087" hosting pod "kube-scheduler-embed-certs-201087" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-201087" has status "Ready":"False"
	I0919 17:27:58.819718  117954 pod_ready.go:81] duration metric: took 398.876725ms waiting for pod "kube-scheduler-embed-certs-201087" in "kube-system" namespace to be "Ready" ...
	E0919 17:27:58.819743  117954 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-201087" hosting pod "kube-scheduler-embed-certs-201087" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-201087" has status "Ready":"False"
	I0919 17:27:58.819755  117954 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace to be "Ready" ...
	I0919 17:27:59.219515  117954 pod_ready.go:97] node "embed-certs-201087" hosting pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-201087" has status "Ready":"False"
	I0919 17:27:59.219542  117954 pod_ready.go:81] duration metric: took 399.77348ms waiting for pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace to be "Ready" ...
	E0919 17:27:59.219552  117954 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-201087" hosting pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-201087" has status "Ready":"False"
	I0919 17:27:59.219563  117954 pod_ready.go:38] duration metric: took 1.193776128s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:27:59.219579  117954 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 17:27:59.231588  117954 ops.go:34] apiserver oom_adj: -16
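The `oom_adj: -16` reading confirms the kernel is biased against OOM-killing the apiserver. A Go rendering of the same one-liner the log runs, `cat /proc/$(pgrep kube-apiserver)/oom_adj` (like the shell form, it assumes a single matching PID):

```go
// Hypothetical sketch: find the kube-apiserver PID and read its
// /proc/<pid>/oom_adj score (-16 in the log above).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func readApiserverOOMAdj() (string, error) {
	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return "", err
	}
	// Assumes pgrep printed exactly one PID, as the log's $(pgrep ...) does.
	data, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid))))
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() { fmt.Println(readApiserverOOMAdj()) }
```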
	I0919 17:27:59.231621  117954 kubeadm.go:640] restartCluster took 20.468816809s
	I0919 17:27:59.231630  117954 kubeadm.go:406] StartCluster complete in 20.509716103s
	I0919 17:27:59.231652  117954 settings.go:142] acquiring lock: {Name:mk5b0472b3a6dd507de44affe9807f6a73f90c46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:27:59.231730  117954 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17240-65689/kubeconfig
	I0919 17:27:59.233262  117954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/kubeconfig: {Name:mkbd16610d1f40f08720849f4f6c1890dee4556c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:27:59.233516  117954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 17:27:59.233631  117954 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0919 17:27:59.233700  117954 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-201087"
	I0919 17:27:59.233722  117954 addons.go:69] Setting default-storageclass=true in profile "embed-certs-201087"
	I0919 17:27:59.233740  117954 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-201087"
	W0919 17:27:59.233748  117954 addons.go:240] addon storage-provisioner should already be in state true
	I0919 17:27:59.233752  117954 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-201087"
	I0919 17:27:59.233777  117954 config.go:182] Loaded profile config "embed-certs-201087": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 17:27:59.233794  117954 host.go:66] Checking if "embed-certs-201087" exists ...
	I0919 17:27:59.233824  117954 addons.go:69] Setting metrics-server=true in profile "embed-certs-201087"
	I0919 17:27:59.233858  117954 addons.go:231] Setting addon metrics-server=true in "embed-certs-201087"
	W0919 17:27:59.233870  117954 addons.go:240] addon metrics-server should already be in state true
	I0919 17:27:59.233848  117954 cache.go:107] acquiring lock: {Name:mk39dabf87437641a7731807e46502447a060f17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:27:59.233928  117954 host.go:66] Checking if "embed-certs-201087" exists ...
	I0919 17:27:59.233930  117954 cache.go:115] /home/jenkins/minikube-integration/17240-65689/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I0919 17:27:59.233941  117954 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17240-65689/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 100.657µs
	I0919 17:27:59.233952  117954 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17240-65689/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I0919 17:27:59.233960  117954 cache.go:87] Successfully saved all images to host disk.
	I0919 17:27:59.234152  117954 config.go:182] Loaded profile config "embed-certs-201087": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 17:27:59.234172  117954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:27:59.234184  117954 addons.go:69] Setting dashboard=true in profile "embed-certs-201087"
	I0919 17:27:59.234197  117954 addons.go:231] Setting addon dashboard=true in "embed-certs-201087"
	I0919 17:27:59.234198  117954 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0919 17:27:59.234204  117954 addons.go:240] addon dashboard should already be in state true
	I0919 17:27:59.234234  117954 host.go:66] Checking if "embed-certs-201087" exists ...
	I0919 17:27:59.234255  117954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:27:59.234172  117954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:27:59.234281  117954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:27:59.234377  117954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:27:59.234495  117954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:27:59.234529  117954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:27:59.234536  117954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:27:59.234563  117954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:27:59.257519  117954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40607
	I0919 17:27:59.257536  117954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45473
	I0919 17:27:59.257545  117954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39267
	I0919 17:27:59.257519  117954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38949
	I0919 17:27:59.257580  117954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36681
	I0919 17:27:59.257940  117954 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-201087" context rescaled to 1 replicas
	I0919 17:27:59.257974  117954 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.129 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 17:27:59.257990  117954 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:27:59.260127  117954 out.go:177] * Verifying Kubernetes components...
	I0919 17:27:59.258084  117954 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:27:59.258112  117954 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:27:59.258616  117954 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:27:59.258685  117954 main.go:141] libmachine: Using API Version  1
	I0919 17:27:59.259046  117954 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:27:59.261660  117954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:27:59.261712  117954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:27:59.262169  117954 main.go:141] libmachine: Using API Version  1
	I0919 17:27:59.262180  117954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:27:59.262216  117954 main.go:141] libmachine: Using API Version  1
	I0919 17:27:59.262233  117954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:27:59.263411  117954 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:27:59.263445  117954 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:27:59.263450  117954 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:27:59.263537  117954 main.go:141] libmachine: Using API Version  1
	I0919 17:27:59.263570  117954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:27:59.268941  117954 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:27:59.269112  117954 main.go:141] libmachine: (embed-certs-201087) Calling .GetState
	I0919 17:27:59.269197  117954 main.go:141] libmachine: Using API Version  1
	I0919 17:27:59.269895  117954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:27:59.269204  117954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:27:59.269949  117954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:27:59.270052  117954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:27:59.270090  117954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:27:59.270289  117954 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:27:59.270754  117954 main.go:141] libmachine: (embed-certs-201087) Calling .GetState
	I0919 17:27:59.270782  117954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:27:59.270820  117954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:27:59.286707  117954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43825
	I0919 17:27:59.290013  117954 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:27:59.290503  117954 main.go:141] libmachine: Using API Version  1
	I0919 17:27:59.290521  117954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:27:59.290876  117954 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:27:59.291087  117954 main.go:141] libmachine: (embed-certs-201087) Calling .GetState
	I0919 17:27:59.293008  117954 main.go:141] libmachine: (embed-certs-201087) Calling .DriverName
	I0919 17:27:59.297310  117954 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:27:59.298023  117954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:27:59.299611  117954 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 17:27:59.299627  117954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 17:27:59.299639  117954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:27:59.299648  117954 main.go:141] libmachine: (embed-certs-201087) Calling .GetSSHHostname
	I0919 17:27:59.298855  117954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43123
	I0919 17:27:59.303081  117954 main.go:141] libmachine: (embed-certs-201087) DBG | domain embed-certs-201087 has defined MAC address 52:54:00:0a:0c:4a in network mk-embed-certs-201087
	I0919 17:27:59.303786  117954 main.go:141] libmachine: (embed-certs-201087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:0c:4a", ip: ""} in network mk-embed-certs-201087: {Iface:virbr2 ExpiryTime:2023-09-19 18:27:26 +0000 UTC Type:0 Mac:52:54:00:0a:0c:4a Iaid: IPaddr:192.168.50.129 Prefix:24 Hostname:embed-certs-201087 Clientid:01:52:54:00:0a:0c:4a}
	I0919 17:27:59.303816  117954 main.go:141] libmachine: (embed-certs-201087) DBG | domain embed-certs-201087 has defined IP address 192.168.50.129 and MAC address 52:54:00:0a:0c:4a in network mk-embed-certs-201087
	I0919 17:27:59.303830  117954 main.go:141] libmachine: (embed-certs-201087) Calling .GetSSHPort
	I0919 17:27:59.304081  117954 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:27:59.304559  117954 main.go:141] libmachine: Using API Version  1
	I0919 17:27:59.304580  117954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:27:59.304839  117954 main.go:141] libmachine: (embed-certs-201087) Calling .GetSSHKeyPath
	I0919 17:27:59.304875  117954 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:27:59.305132  117954 main.go:141] libmachine: (embed-certs-201087) Calling .GetState
	I0919 17:27:59.305370  117954 main.go:141] libmachine: (embed-certs-201087) Calling .GetSSHUsername
	I0919 17:27:59.305535  117954 sshutil.go:53] new ssh client: &{IP:192.168.50.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/embed-certs-201087/id_rsa Username:docker}
	I0919 17:27:59.306809  117954 main.go:141] libmachine: (embed-certs-201087) Calling .DriverName
	I0919 17:27:59.308831  117954 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0919 17:27:59.310607  117954 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0919 17:27:59.312116  117954 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0919 17:27:59.310435  117954 addons.go:231] Setting addon default-storageclass=true in "embed-certs-201087"
	W0919 17:27:59.312149  117954 addons.go:240] addon default-storageclass should already be in state true
	I0919 17:27:59.312183  117954 host.go:66] Checking if "embed-certs-201087" exists ...
	I0919 17:27:59.312133  117954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0919 17:27:59.312265  117954 main.go:141] libmachine: (embed-certs-201087) Calling .GetSSHHostname
	I0919 17:27:59.312563  117954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:27:59.312614  117954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:27:59.315571  117954 main.go:141] libmachine: (embed-certs-201087) DBG | domain embed-certs-201087 has defined MAC address 52:54:00:0a:0c:4a in network mk-embed-certs-201087
	I0919 17:27:59.316073  117954 main.go:141] libmachine: (embed-certs-201087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:0c:4a", ip: ""} in network mk-embed-certs-201087: {Iface:virbr2 ExpiryTime:2023-09-19 18:27:26 +0000 UTC Type:0 Mac:52:54:00:0a:0c:4a Iaid: IPaddr:192.168.50.129 Prefix:24 Hostname:embed-certs-201087 Clientid:01:52:54:00:0a:0c:4a}
	I0919 17:27:59.316098  117954 main.go:141] libmachine: (embed-certs-201087) DBG | domain embed-certs-201087 has defined IP address 192.168.50.129 and MAC address 52:54:00:0a:0c:4a in network mk-embed-certs-201087
	I0919 17:27:59.317010  117954 main.go:141] libmachine: (embed-certs-201087) Calling .GetSSHPort
	I0919 17:27:59.317166  117954 main.go:141] libmachine: (embed-certs-201087) Calling .GetSSHKeyPath
	I0919 17:27:59.317281  117954 main.go:141] libmachine: (embed-certs-201087) Calling .GetSSHUsername
	I0919 17:27:59.317378  117954 sshutil.go:53] new ssh client: &{IP:192.168.50.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/embed-certs-201087/id_rsa Username:docker}
	I0919 17:27:59.320924  117954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44981
	I0919 17:27:59.321435  117954 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:27:59.321937  117954 main.go:141] libmachine: Using API Version  1
	I0919 17:27:59.321956  117954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:27:59.322313  117954 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:27:59.322504  117954 main.go:141] libmachine: (embed-certs-201087) Calling .DriverName
	I0919 17:27:59.322680  117954 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 17:27:59.322703  117954 main.go:141] libmachine: (embed-certs-201087) Calling .GetSSHHostname
	I0919 17:27:59.323870  117954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38203
	I0919 17:27:59.324376  117954 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:27:59.324895  117954 main.go:141] libmachine: Using API Version  1
	I0919 17:27:59.324916  117954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:27:59.325286  117954 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:27:59.325483  117954 main.go:141] libmachine: (embed-certs-201087) Calling .GetState
	I0919 17:27:59.325792  117954 main.go:141] libmachine: (embed-certs-201087) DBG | domain embed-certs-201087 has defined MAC address 52:54:00:0a:0c:4a in network mk-embed-certs-201087
	I0919 17:27:59.326440  117954 main.go:141] libmachine: (embed-certs-201087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:0c:4a", ip: ""} in network mk-embed-certs-201087: {Iface:virbr2 ExpiryTime:2023-09-19 18:27:26 +0000 UTC Type:0 Mac:52:54:00:0a:0c:4a Iaid: IPaddr:192.168.50.129 Prefix:24 Hostname:embed-certs-201087 Clientid:01:52:54:00:0a:0c:4a}
	I0919 17:27:59.326466  117954 main.go:141] libmachine: (embed-certs-201087) DBG | domain embed-certs-201087 has defined IP address 192.168.50.129 and MAC address 52:54:00:0a:0c:4a in network mk-embed-certs-201087
	I0919 17:27:59.326643  117954 main.go:141] libmachine: (embed-certs-201087) Calling .GetSSHPort
	I0919 17:27:59.326848  117954 main.go:141] libmachine: (embed-certs-201087) Calling .GetSSHKeyPath
	I0919 17:27:59.326999  117954 main.go:141] libmachine: (embed-certs-201087) Calling .GetSSHUsername
	I0919 17:27:59.327128  117954 sshutil.go:53] new ssh client: &{IP:192.168.50.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/embed-certs-201087/id_rsa Username:docker}
	I0919 17:27:59.327703  117954 main.go:141] libmachine: (embed-certs-201087) Calling .DriverName
	I0919 17:27:59.330128  117954 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0919 17:27:59.799065  118443 start.go:369] acquired machines lock for "default-k8s-diff-port-210669" in 21.555577609s
	I0919 17:27:59.799138  118443 start.go:96] Skipping create...Using existing machine configuration
	I0919 17:27:59.799147  118443 fix.go:54] fixHost starting: 
	I0919 17:27:59.799582  118443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:27:59.799640  118443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:27:59.817864  118443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39105
	I0919 17:27:59.818161  118443 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:27:59.818604  118443 main.go:141] libmachine: Using API Version  1
	I0919 17:27:59.818621  118443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:27:59.818903  118443 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:27:59.819061  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .DriverName
	I0919 17:27:59.819206  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetState
	I0919 17:27:59.820794  118443 fix.go:102] recreateIfNeeded on default-k8s-diff-port-210669: state=Stopped err=<nil>
	I0919 17:27:59.820833  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .DriverName
	W0919 17:27:59.820990  118443 fix.go:128] unexpected machine state, will restart: <nil>
	I0919 17:27:59.822937  118443 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-210669" ...
	I0919 17:27:57.558970  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:57.559718  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has current primary IP address 192.168.83.162 and MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:57.559740  118166 main.go:141] libmachine: (old-k8s-version-367105) Found IP for machine: 192.168.83.162
	I0919 17:27:57.559755  118166 main.go:141] libmachine: (old-k8s-version-367105) Reserving static IP address...
	I0919 17:27:57.560291  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | found host DHCP lease matching {name: "old-k8s-version-367105", mac: "52:54:00:5d:9c:55", ip: "192.168.83.162"} in network mk-old-k8s-version-367105: {Iface:virbr5 ExpiryTime:2023-09-19 18:27:47 +0000 UTC Type:0 Mac:52:54:00:5d:9c:55 Iaid: IPaddr:192.168.83.162 Prefix:24 Hostname:old-k8s-version-367105 Clientid:01:52:54:00:5d:9c:55}
	I0919 17:27:57.560327  118166 main.go:141] libmachine: (old-k8s-version-367105) Reserved static IP address: 192.168.83.162
	I0919 17:27:57.560347  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | skip adding static IP to network mk-old-k8s-version-367105 - found existing host DHCP lease matching {name: "old-k8s-version-367105", mac: "52:54:00:5d:9c:55", ip: "192.168.83.162"}
	I0919 17:27:57.560367  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | Getting to WaitForSSH function...
	I0919 17:27:57.560384  118166 main.go:141] libmachine: (old-k8s-version-367105) Waiting for SSH to be available...
	I0919 17:27:57.562834  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:57.563239  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:9c:55", ip: ""} in network mk-old-k8s-version-367105: {Iface:virbr5 ExpiryTime:2023-09-19 18:27:47 +0000 UTC Type:0 Mac:52:54:00:5d:9c:55 Iaid: IPaddr:192.168.83.162 Prefix:24 Hostname:old-k8s-version-367105 Clientid:01:52:54:00:5d:9c:55}
	I0919 17:27:57.563280  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined IP address 192.168.83.162 and MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:57.563519  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | Using SSH client type: external
	I0919 17:27:57.563552  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | Using SSH private key: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/old-k8s-version-367105/id_rsa (-rw-------)
	I0919 17:27:57.563601  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17240-65689/.minikube/machines/old-k8s-version-367105/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 17:27:57.563622  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | About to run SSH command:
	I0919 17:27:57.563638  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | exit 0
	I0919 17:27:57.667156  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | SSH cmd err, output: <nil>: 
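[editor's note] For context on the "Using SSH client type: external" exchange above: rather than a Go SSH library, libmachine shells out to the system ssh binary with the exact option set shown in the DBG line, then probes reachability with "exit 0". A minimal Go sketch of that pattern follows; the helper name and key path are hypothetical, not minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
)

// runExternalSSH mirrors the "external" SSH client type logged above:
// shell out to the system ssh binary with a fixed, non-interactive
// option set. Hypothetical helper; the real code lives in libmachine.
func runExternalSSH(user, host, keyPath, command string) (string, error) {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, host),
		command,
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	return string(out), err
}

func main() {
	// Probe reachability the same way the log does: run "exit 0".
	out, err := runExternalSSH("docker", "192.168.83.162", "/path/to/id_rsa", "exit 0")
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}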
	I0919 17:27:57.667587  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetConfigRaw
	I0919 17:27:57.668416  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetIP
	I0919 17:27:57.671673  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:57.672064  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:9c:55", ip: ""} in network mk-old-k8s-version-367105: {Iface:virbr5 ExpiryTime:2023-09-19 18:27:47 +0000 UTC Type:0 Mac:52:54:00:5d:9c:55 Iaid: IPaddr:192.168.83.162 Prefix:24 Hostname:old-k8s-version-367105 Clientid:01:52:54:00:5d:9c:55}
	I0919 17:27:57.672098  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined IP address 192.168.83.162 and MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:57.672467  118166 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/old-k8s-version-367105/config.json ...
	I0919 17:27:57.672724  118166 machine.go:88] provisioning docker machine ...
	I0919 17:27:57.672752  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .DriverName
	I0919 17:27:57.672995  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetMachineName
	I0919 17:27:57.673204  118166 buildroot.go:166] provisioning hostname "old-k8s-version-367105"
	I0919 17:27:57.673229  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetMachineName
	I0919 17:27:57.673450  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHHostname
	I0919 17:27:57.676187  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:57.676591  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:9c:55", ip: ""} in network mk-old-k8s-version-367105: {Iface:virbr5 ExpiryTime:2023-09-19 18:27:47 +0000 UTC Type:0 Mac:52:54:00:5d:9c:55 Iaid: IPaddr:192.168.83.162 Prefix:24 Hostname:old-k8s-version-367105 Clientid:01:52:54:00:5d:9c:55}
	I0919 17:27:57.676634  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined IP address 192.168.83.162 and MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:57.676774  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHPort
	I0919 17:27:57.676958  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHKeyPath
	I0919 17:27:57.677152  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHKeyPath
	I0919 17:27:57.677336  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHUsername
	I0919 17:27:57.677534  118166 main.go:141] libmachine: Using SSH client type: native
	I0919 17:27:57.678113  118166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.162 22 <nil> <nil>}
	I0919 17:27:57.678139  118166 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-367105 && echo "old-k8s-version-367105" | sudo tee /etc/hostname
	I0919 17:27:57.819118  118166 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-367105
	
	I0919 17:27:57.819154  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHHostname
	I0919 17:27:57.822172  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:57.822549  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:9c:55", ip: ""} in network mk-old-k8s-version-367105: {Iface:virbr5 ExpiryTime:2023-09-19 18:27:47 +0000 UTC Type:0 Mac:52:54:00:5d:9c:55 Iaid: IPaddr:192.168.83.162 Prefix:24 Hostname:old-k8s-version-367105 Clientid:01:52:54:00:5d:9c:55}
	I0919 17:27:57.822584  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined IP address 192.168.83.162 and MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:57.822769  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHPort
	I0919 17:27:57.823049  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHKeyPath
	I0919 17:27:57.823257  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHKeyPath
	I0919 17:27:57.823453  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHUsername
	I0919 17:27:57.823744  118166 main.go:141] libmachine: Using SSH client type: native
	I0919 17:27:57.824250  118166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.162 22 <nil> <nil>}
	I0919 17:27:57.824282  118166 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-367105' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-367105/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-367105' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 17:27:57.952140  118166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
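[editor's note] The multi-line shell command above is deliberately idempotent: it rewrites an existing 127.0.1.1 entry if one is present, appends one otherwise, and does nothing if the hostname is already in /etc/hosts. That matters because provisioning can be retried. A sketch of how such a command might be rendered for a given hostname (hypothetical helper, not minikube's provision code):

package main

import "fmt"

// setHostsCmd renders the idempotent /etc/hosts update shown above.
// Re-running the result against an already-configured host is a no-op.
func setHostsCmd(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(setHostsCmd("old-k8s-version-367105"))
}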
	I0919 17:27:57.952173  118166 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17240-65689/.minikube CaCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17240-65689/.minikube}
	I0919 17:27:57.952200  118166 buildroot.go:174] setting up certificates
	I0919 17:27:57.952235  118166 provision.go:83] configureAuth start
	I0919 17:27:57.952253  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetMachineName
	I0919 17:27:57.952566  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetIP
	I0919 17:27:57.955655  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:57.956086  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:9c:55", ip: ""} in network mk-old-k8s-version-367105: {Iface:virbr5 ExpiryTime:2023-09-19 18:27:47 +0000 UTC Type:0 Mac:52:54:00:5d:9c:55 Iaid: IPaddr:192.168.83.162 Prefix:24 Hostname:old-k8s-version-367105 Clientid:01:52:54:00:5d:9c:55}
	I0919 17:27:57.956151  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined IP address 192.168.83.162 and MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:57.956318  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHHostname
	I0919 17:27:57.958897  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:57.959306  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:9c:55", ip: ""} in network mk-old-k8s-version-367105: {Iface:virbr5 ExpiryTime:2023-09-19 18:27:47 +0000 UTC Type:0 Mac:52:54:00:5d:9c:55 Iaid: IPaddr:192.168.83.162 Prefix:24 Hostname:old-k8s-version-367105 Clientid:01:52:54:00:5d:9c:55}
	I0919 17:27:57.959342  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined IP address 192.168.83.162 and MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:57.959495  118166 provision.go:138] copyHostCerts
	I0919 17:27:57.959555  118166 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem, removing ...
	I0919 17:27:57.959567  118166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem
	I0919 17:27:57.959635  118166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem (1078 bytes)
	I0919 17:27:57.959766  118166 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem, removing ...
	I0919 17:27:57.959778  118166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem
	I0919 17:27:57.959815  118166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem (1123 bytes)
	I0919 17:27:57.959918  118166 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem, removing ...
	I0919 17:27:57.959931  118166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem
	I0919 17:27:57.959960  118166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem (1675 bytes)
	I0919 17:27:57.960025  118166 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-367105 san=[192.168.83.162 192.168.83.162 localhost 127.0.0.1 minikube old-k8s-version-367105]
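[editor's note] The "generating server cert" line above shows the interesting part of this step: the SAN list covers the VM IP, localhost, 127.0.0.1, and both machine names, so TLS verification succeeds however the docker daemon is later addressed. A compressed crypto/x509 sketch of the same idea, self-signed for brevity (minikube actually signs with the ca.pem/ca-key.pem pair; the SAN values below are copied from the log):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Sketch: issue a server cert whose SANs match the log line above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-367105"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.83.162"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-367105"},
	}
	// Self-signed here (template doubles as parent); a CA-signed flow
	// would pass the CA cert and CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}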
	I0919 17:27:58.104224  118166 provision.go:172] copyRemoteCerts
	I0919 17:27:58.104291  118166 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 17:27:58.104317  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHHostname
	I0919 17:27:58.107478  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:58.107882  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:9c:55", ip: ""} in network mk-old-k8s-version-367105: {Iface:virbr5 ExpiryTime:2023-09-19 18:27:47 +0000 UTC Type:0 Mac:52:54:00:5d:9c:55 Iaid: IPaddr:192.168.83.162 Prefix:24 Hostname:old-k8s-version-367105 Clientid:01:52:54:00:5d:9c:55}
	I0919 17:27:58.107919  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined IP address 192.168.83.162 and MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:58.108117  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHPort
	I0919 17:27:58.108359  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHKeyPath
	I0919 17:27:58.108550  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHUsername
	I0919 17:27:58.108696  118166 sshutil.go:53] new ssh client: &{IP:192.168.83.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/old-k8s-version-367105/id_rsa Username:docker}
	I0919 17:27:58.202731  118166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 17:27:58.226797  118166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0919 17:27:58.251030  118166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 17:27:58.274591  118166 provision.go:86] duration metric: configureAuth took 322.336035ms
	I0919 17:27:58.274623  118166 buildroot.go:189] setting minikube options for container-runtime
	I0919 17:27:58.274850  118166 config.go:182] Loaded profile config "old-k8s-version-367105": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0919 17:27:58.274881  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .DriverName
	I0919 17:27:58.275159  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHHostname
	I0919 17:27:58.278175  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:58.278579  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:9c:55", ip: ""} in network mk-old-k8s-version-367105: {Iface:virbr5 ExpiryTime:2023-09-19 18:27:47 +0000 UTC Type:0 Mac:52:54:00:5d:9c:55 Iaid: IPaddr:192.168.83.162 Prefix:24 Hostname:old-k8s-version-367105 Clientid:01:52:54:00:5d:9c:55}
	I0919 17:27:58.278612  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined IP address 192.168.83.162 and MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:58.278789  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHPort
	I0919 17:27:58.279000  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHKeyPath
	I0919 17:27:58.279175  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHKeyPath
	I0919 17:27:58.279356  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHUsername
	I0919 17:27:58.279535  118166 main.go:141] libmachine: Using SSH client type: native
	I0919 17:27:58.279850  118166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.162 22 <nil> <nil>}
	I0919 17:27:58.279863  118166 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 17:27:58.399751  118166 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0919 17:27:58.399780  118166 buildroot.go:70] root file system type: tmpfs
	I0919 17:27:58.399905  118166 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 17:27:58.399939  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHHostname
	I0919 17:27:58.403009  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:58.403355  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:9c:55", ip: ""} in network mk-old-k8s-version-367105: {Iface:virbr5 ExpiryTime:2023-09-19 18:27:47 +0000 UTC Type:0 Mac:52:54:00:5d:9c:55 Iaid: IPaddr:192.168.83.162 Prefix:24 Hostname:old-k8s-version-367105 Clientid:01:52:54:00:5d:9c:55}
	I0919 17:27:58.403387  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined IP address 192.168.83.162 and MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:58.403568  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHPort
	I0919 17:27:58.403764  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHKeyPath
	I0919 17:27:58.403952  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHKeyPath
	I0919 17:27:58.404140  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHUsername
	I0919 17:27:58.404335  118166 main.go:141] libmachine: Using SSH client type: native
	I0919 17:27:58.404711  118166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.162 22 <nil> <nil>}
	I0919 17:27:58.404787  118166 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 17:27:58.537262  118166 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0919 17:27:58.537319  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHHostname
	I0919 17:27:58.539963  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:58.540410  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:9c:55", ip: ""} in network mk-old-k8s-version-367105: {Iface:virbr5 ExpiryTime:2023-09-19 18:27:47 +0000 UTC Type:0 Mac:52:54:00:5d:9c:55 Iaid: IPaddr:192.168.83.162 Prefix:24 Hostname:old-k8s-version-367105 Clientid:01:52:54:00:5d:9c:55}
	I0919 17:27:58.540445  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined IP address 192.168.83.162 and MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:58.540612  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHPort
	I0919 17:27:58.540812  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHKeyPath
	I0919 17:27:58.540964  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHKeyPath
	I0919 17:27:58.541078  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHUsername
	I0919 17:27:58.541328  118166 main.go:141] libmachine: Using SSH client type: native
	I0919 17:27:58.541681  118166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.162 22 <nil> <nil>}
	I0919 17:27:58.541701  118166 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 17:27:59.519160  118166 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0919 17:27:59.519191  118166 machine.go:91] provisioned docker machine in 1.846447345s
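[editor's note] Note the update protocol in the command above: the rendered unit is written to docker.service.new, diffed against the installed unit, and only swapped in (followed by daemon-reload, enable, restart) when the two differ. On this freshly restarted VM the old unit does not exist yet, so diff fails with "can't stat" and the new unit is installed unconditionally. A sketch of composing that one-liner (hypothetical helper, not provision.go itself):

package main

import "fmt"

// swapUnitCmd renders the update-if-changed one-liner used above, so an
// unchanged unit never triggers a needless docker restart.
func swapUnitCmd(unit string) string {
	cur := "/lib/systemd/system/" + unit
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && "+
			"sudo systemctl -f restart %[2]s; }", cur, unit)
}

func main() {
	fmt.Println(swapUnitCmd("docker.service"))
}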
	I0919 17:27:59.519206  118166 start.go:300] post-start starting for "old-k8s-version-367105" (driver="kvm2")
	I0919 17:27:59.519220  118166 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 17:27:59.519241  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .DriverName
	I0919 17:27:59.519603  118166 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 17:27:59.519646  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHHostname
	I0919 17:27:59.522836  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:59.523291  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:9c:55", ip: ""} in network mk-old-k8s-version-367105: {Iface:virbr5 ExpiryTime:2023-09-19 18:27:47 +0000 UTC Type:0 Mac:52:54:00:5d:9c:55 Iaid: IPaddr:192.168.83.162 Prefix:24 Hostname:old-k8s-version-367105 Clientid:01:52:54:00:5d:9c:55}
	I0919 17:27:59.523323  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined IP address 192.168.83.162 and MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:59.523521  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHPort
	I0919 17:27:59.523721  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHKeyPath
	I0919 17:27:59.523880  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHUsername
	I0919 17:27:59.524052  118166 sshutil.go:53] new ssh client: &{IP:192.168.83.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/old-k8s-version-367105/id_rsa Username:docker}
	I0919 17:27:59.620190  118166 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 17:27:59.626040  118166 info.go:137] Remote host: Buildroot 2021.02.12
	I0919 17:27:59.626067  118166 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-65689/.minikube/addons for local assets ...
	I0919 17:27:59.626143  118166 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-65689/.minikube/files for local assets ...
	I0919 17:27:59.626257  118166 filesync.go:149] local asset: /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem -> 733972.pem in /etc/ssl/certs
	I0919 17:27:59.626372  118166 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 17:27:59.638110  118166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem --> /etc/ssl/certs/733972.pem (1708 bytes)
	I0919 17:27:59.667082  118166 start.go:303] post-start completed in 147.856596ms
	I0919 17:27:59.667132  118166 fix.go:56] fixHost completed within 25.276304623s
	I0919 17:27:59.667157  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHHostname
	I0919 17:27:59.670118  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:59.670547  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:9c:55", ip: ""} in network mk-old-k8s-version-367105: {Iface:virbr5 ExpiryTime:2023-09-19 18:27:47 +0000 UTC Type:0 Mac:52:54:00:5d:9c:55 Iaid: IPaddr:192.168.83.162 Prefix:24 Hostname:old-k8s-version-367105 Clientid:01:52:54:00:5d:9c:55}
	I0919 17:27:59.670582  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined IP address 192.168.83.162 and MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:59.670792  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHPort
	I0919 17:27:59.671011  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHKeyPath
	I0919 17:27:59.671190  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHKeyPath
	I0919 17:27:59.671352  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHUsername
	I0919 17:27:59.671534  118166 main.go:141] libmachine: Using SSH client type: native
	I0919 17:27:59.671905  118166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.162 22 <nil> <nil>}
	I0919 17:27:59.671919  118166 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 17:27:59.798927  118166 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695144479.741862058
	
	I0919 17:27:59.798947  118166 fix.go:206] guest clock: 1695144479.741862058
	I0919 17:27:59.798958  118166 fix.go:219] Guest: 2023-09-19 17:27:59.741862058 +0000 UTC Remote: 2023-09-19 17:27:59.667136269 +0000 UTC m=+44.038918164 (delta=74.725789ms)
	I0919 17:27:59.798983  118166 fix.go:190] guest clock delta is within tolerance: 74.725789ms
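[editor's note] The clock check above runs date +%s.%N in the guest, parses the seconds.nanoseconds pair, and compares the guest/host delta against a tolerance before deciding whether a resync is needed. A sketch of that parse-and-compare step (the 2s tolerance is an assumed value, not minikube's constant, and error handling is elided):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Sketch: parse "seconds.nanoseconds" from `date +%s.%N` output and
	// compare the guest clock against the local clock, as in fix.go above.
	out := "1695144479.741862058"
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	// 2s tolerance here is an assumption for illustration only.
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta < 2*time.Second)
}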
	I0919 17:27:59.798989  118166 start.go:83] releasing machines lock for "old-k8s-version-367105", held for 25.408198341s
	I0919 17:27:59.799022  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .DriverName
	I0919 17:27:59.799331  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetIP
	I0919 17:27:59.803030  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:59.803472  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:9c:55", ip: ""} in network mk-old-k8s-version-367105: {Iface:virbr5 ExpiryTime:2023-09-19 18:27:47 +0000 UTC Type:0 Mac:52:54:00:5d:9c:55 Iaid: IPaddr:192.168.83.162 Prefix:24 Hostname:old-k8s-version-367105 Clientid:01:52:54:00:5d:9c:55}
	I0919 17:27:59.803506  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined IP address 192.168.83.162 and MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:59.803769  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .DriverName
	I0919 17:27:59.804457  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .DriverName
	I0919 17:27:59.804643  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .DriverName
	I0919 17:27:59.804723  118166 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 17:27:59.804779  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHHostname
	I0919 17:27:59.805018  118166 ssh_runner.go:195] Run: cat /version.json
	I0919 17:27:59.805056  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHHostname
	I0919 17:27:59.808493  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:59.808601  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:59.808889  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:9c:55", ip: ""} in network mk-old-k8s-version-367105: {Iface:virbr5 ExpiryTime:2023-09-19 18:27:47 +0000 UTC Type:0 Mac:52:54:00:5d:9c:55 Iaid: IPaddr:192.168.83.162 Prefix:24 Hostname:old-k8s-version-367105 Clientid:01:52:54:00:5d:9c:55}
	I0919 17:27:59.808916  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined IP address 192.168.83.162 and MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:59.808990  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:9c:55", ip: ""} in network mk-old-k8s-version-367105: {Iface:virbr5 ExpiryTime:2023-09-19 18:27:47 +0000 UTC Type:0 Mac:52:54:00:5d:9c:55 Iaid: IPaddr:192.168.83.162 Prefix:24 Hostname:old-k8s-version-367105 Clientid:01:52:54:00:5d:9c:55}
	I0919 17:27:59.809009  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined IP address 192.168.83.162 and MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:27:59.809172  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHPort
	I0919 17:27:59.809222  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHPort
	I0919 17:27:59.809378  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHKeyPath
	I0919 17:27:59.809408  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHKeyPath
	I0919 17:27:59.809556  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHUsername
	I0919 17:27:59.809600  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHUsername
	I0919 17:27:59.809780  118166 sshutil.go:53] new ssh client: &{IP:192.168.83.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/old-k8s-version-367105/id_rsa Username:docker}
	I0919 17:27:59.810366  118166 sshutil.go:53] new ssh client: &{IP:192.168.83.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/old-k8s-version-367105/id_rsa Username:docker}
	I0919 17:27:59.921232  118166 ssh_runner.go:195] Run: systemctl --version
	I0919 17:27:59.929009  118166 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 17:27:59.935949  118166 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 17:27:59.936025  118166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0919 17:27:59.949022  118166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0919 17:27:59.971761  118166 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 17:27:59.971796  118166 start.go:469] detecting cgroup driver to use...
	I0919 17:27:59.971969  118166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 17:27:59.997304  118166 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0919 17:28:00.009700  118166 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 17:28:00.022106  118166 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0919 17:28:00.022179  118166 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0919 17:28:00.035744  118166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 17:28:00.048034  118166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 17:28:00.058775  118166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 17:28:00.073312  118166 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 17:28:00.087694  118166 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 17:28:00.101779  118166 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 17:28:00.112676  118166 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 17:28:00.124520  118166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 17:28:00.248223  118166 ssh_runner.go:195] Run: sudo systemctl restart containerd
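[editor's note] The run of sed commands above rewrites /etc/containerd/config.toml in place so containerd uses the cgroupfs driver and the runc v2 shim; every edit is a regex substitution that is safe to re-run, so repeated provisioning converges on the same file. A sketch of applying such edits as a batch (patterns and path are copied from the log; the loop itself is illustrative, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Sketch: the idempotent config.toml edits above, applied in order.
	// Each sed is safe to re-run, so retried provisioning converges.
	edits := []string{
		`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
		`sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
		`sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml`,
	}
	for _, e := range edits {
		if out, err := exec.Command("sh", "-c", e).CombinedOutput(); err != nil {
			fmt.Printf("edit failed: %v: %s\n", err, out)
		}
	}
}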
	I0919 17:28:00.267209  118166 start.go:469] detecting cgroup driver to use...
	I0919 17:28:00.267336  118166 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 17:28:00.296705  118166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 17:28:00.313756  118166 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 17:28:00.341121  118166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 17:28:00.358141  118166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 17:28:00.374458  118166 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 17:28:00.411075  118166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 17:28:00.427487  118166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 17:28:00.447485  118166 ssh_runner.go:195] Run: which cri-dockerd
	I0919 17:28:00.452119  118166 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 17:28:00.461769  118166 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0919 17:28:00.480458  118166 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 17:28:00.618860  118166 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 17:27:58.582437  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:00.584370  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:27:59.331872  117954 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 17:27:59.331892  117954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 17:27:59.330378  117954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36721
	I0919 17:27:59.331912  117954 main.go:141] libmachine: (embed-certs-201087) Calling .GetSSHHostname
	I0919 17:27:59.332284  117954 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:27:59.332821  117954 main.go:141] libmachine: Using API Version  1
	I0919 17:27:59.332847  117954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:27:59.333191  117954 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:27:59.333733  117954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:27:59.333778  117954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:27:59.334984  117954 main.go:141] libmachine: (embed-certs-201087) DBG | domain embed-certs-201087 has defined MAC address 52:54:00:0a:0c:4a in network mk-embed-certs-201087
	I0919 17:27:59.335338  117954 main.go:141] libmachine: (embed-certs-201087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:0c:4a", ip: ""} in network mk-embed-certs-201087: {Iface:virbr2 ExpiryTime:2023-09-19 18:27:26 +0000 UTC Type:0 Mac:52:54:00:0a:0c:4a Iaid: IPaddr:192.168.50.129 Prefix:24 Hostname:embed-certs-201087 Clientid:01:52:54:00:0a:0c:4a}
	I0919 17:27:59.335362  117954 main.go:141] libmachine: (embed-certs-201087) DBG | domain embed-certs-201087 has defined IP address 192.168.50.129 and MAC address 52:54:00:0a:0c:4a in network mk-embed-certs-201087
	I0919 17:27:59.335525  117954 main.go:141] libmachine: (embed-certs-201087) Calling .GetSSHPort
	I0919 17:27:59.335685  117954 main.go:141] libmachine: (embed-certs-201087) Calling .GetSSHKeyPath
	I0919 17:27:59.335875  117954 main.go:141] libmachine: (embed-certs-201087) Calling .GetSSHUsername
	I0919 17:27:59.335960  117954 sshutil.go:53] new ssh client: &{IP:192.168.50.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/embed-certs-201087/id_rsa Username:docker}
	I0919 17:27:59.354693  117954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35491
	I0919 17:27:59.355091  117954 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:27:59.355536  117954 main.go:141] libmachine: Using API Version  1
	I0919 17:27:59.355555  117954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:27:59.356061  117954 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:27:59.356212  117954 main.go:141] libmachine: (embed-certs-201087) Calling .GetState
	I0919 17:27:59.361277  117954 main.go:141] libmachine: (embed-certs-201087) Calling .DriverName
	I0919 17:27:59.361569  117954 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 17:27:59.361592  117954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 17:27:59.361637  117954 main.go:141] libmachine: (embed-certs-201087) Calling .GetSSHHostname
	I0919 17:27:59.364378  117954 main.go:141] libmachine: (embed-certs-201087) DBG | domain embed-certs-201087 has defined MAC address 52:54:00:0a:0c:4a in network mk-embed-certs-201087
	I0919 17:27:59.364796  117954 main.go:141] libmachine: (embed-certs-201087) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:0c:4a", ip: ""} in network mk-embed-certs-201087: {Iface:virbr2 ExpiryTime:2023-09-19 18:27:26 +0000 UTC Type:0 Mac:52:54:00:0a:0c:4a Iaid: IPaddr:192.168.50.129 Prefix:24 Hostname:embed-certs-201087 Clientid:01:52:54:00:0a:0c:4a}
	I0919 17:27:59.364829  117954 main.go:141] libmachine: (embed-certs-201087) DBG | domain embed-certs-201087 has defined IP address 192.168.50.129 and MAC address 52:54:00:0a:0c:4a in network mk-embed-certs-201087
	I0919 17:27:59.365396  117954 main.go:141] libmachine: (embed-certs-201087) Calling .GetSSHPort
	I0919 17:27:59.365611  117954 main.go:141] libmachine: (embed-certs-201087) Calling .GetSSHKeyPath
	I0919 17:27:59.365780  117954 main.go:141] libmachine: (embed-certs-201087) Calling .GetSSHUsername
	I0919 17:27:59.365941  117954 sshutil.go:53] new ssh client: &{IP:192.168.50.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/embed-certs-201087/id_rsa Username:docker}
	I0919 17:27:59.434633  117954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 17:27:59.479729  117954 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0919 17:27:59.479754  117954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0919 17:27:59.522684  117954 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0919 17:27:59.522713  117954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0919 17:27:59.557205  117954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 17:27:59.558397  117954 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 17:27:59.558416  117954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0919 17:27:59.628854  117954 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 17:27:59.628879  117954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 17:27:59.677259  117954 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0919 17:27:59.677288  117954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0919 17:27:59.696407  117954 node_ready.go:35] waiting up to 6m0s for node "embed-certs-201087" to be "Ready" ...
	I0919 17:27:59.696554  117954 docker.go:636] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0919 17:27:59.696600  117954 cache_images.go:84] Images are preloaded, skipping loading
	I0919 17:27:59.696632  117954 cache_images.go:262] succeeded pushing to: embed-certs-201087
	I0919 17:27:59.696646  117954 cache_images.go:263] failed pushing to: 
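[editor's note] The "Images are preloaded, skipping loading" decision above is a set comparison: every image required for the requested Kubernetes version must already be present in the daemon, otherwise the cached tarballs are pushed. A sketch of that check (the image lists are abbreviated from the stdout block above; the real logic is in minikube's cache_images.go):

package main

import "fmt"

func main() {
	// Sketch: skip image loading only when everything required is present.
	// Lists abbreviated from the "Got preloaded images" output above.
	have := map[string]bool{
		"registry.k8s.io/kube-apiserver:v1.28.2": true,
		"registry.k8s.io/etcd:3.5.9-0":           true,
		"registry.k8s.io/pause:3.9":              true,
	}
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.28.2",
		"registry.k8s.io/etcd:3.5.9-0",
		"registry.k8s.io/pause:3.9",
	}
	preloaded := true
	for _, img := range required {
		if !have[img] {
			preloaded = false
			break
		}
	}
	fmt.Println("Images are preloaded, skipping loading:", preloaded)
}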
	I0919 17:27:59.696701  117954 main.go:141] libmachine: Making call to close driver server
	I0919 17:27:59.696718  117954 main.go:141] libmachine: (embed-certs-201087) Calling .Close
	I0919 17:27:59.697029  117954 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:27:59.697049  117954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:27:59.697060  117954 main.go:141] libmachine: Making call to close driver server
	I0919 17:27:59.697069  117954 main.go:141] libmachine: (embed-certs-201087) Calling .Close
	I0919 17:27:59.697467  117954 main.go:141] libmachine: (embed-certs-201087) DBG | Closing plugin on server side
	I0919 17:27:59.697496  117954 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:27:59.697513  117954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:27:59.698232  117954 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0919 17:27:59.782032  117954 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 17:27:59.782065  117954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 17:27:59.784377  117954 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0919 17:27:59.784397  117954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0919 17:27:59.816401  117954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 17:27:59.833920  117954 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0919 17:27:59.833942  117954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0919 17:27:59.894402  117954 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0919 17:27:59.894431  117954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0919 17:28:00.030721  117954 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0919 17:28:00.030747  117954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0919 17:28:00.102375  117954 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0919 17:28:00.102397  117954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0919 17:28:00.132077  117954 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0919 17:28:00.132111  117954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0919 17:28:00.158344  117954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0919 17:28:01.575984  117954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.018735199s)
	I0919 17:28:01.576070  117954 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:01.576092  117954 main.go:141] libmachine: (embed-certs-201087) Calling .Close
	I0919 17:28:01.576200  117954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.759758788s)
	I0919 17:28:01.576351  117954 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:01.576443  117954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.141776403s)
	I0919 17:28:01.576478  117954 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:01.576491  117954 main.go:141] libmachine: (embed-certs-201087) Calling .Close
	I0919 17:28:01.576500  117954 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:01.576513  117954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:01.576520  117954 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:01.576528  117954 main.go:141] libmachine: (embed-certs-201087) Calling .Close
	I0919 17:28:01.576451  117954 main.go:141] libmachine: (embed-certs-201087) DBG | Closing plugin on server side
	I0919 17:28:01.576911  117954 main.go:141] libmachine: (embed-certs-201087) DBG | Closing plugin on server side
	I0919 17:28:01.576932  117954 main.go:141] libmachine: (embed-certs-201087) DBG | Closing plugin on server side
	I0919 17:28:01.576941  117954 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:01.576950  117954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:01.576959  117954 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:01.576968  117954 main.go:141] libmachine: (embed-certs-201087) Calling .Close
	I0919 17:28:01.576971  117954 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:01.576981  117954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:01.576993  117954 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:01.577002  117954 main.go:141] libmachine: (embed-certs-201087) Calling .Close
	I0919 17:28:01.577187  117954 main.go:141] libmachine: (embed-certs-201087) DBG | Closing plugin on server side
	I0919 17:28:01.577211  117954 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:01.577220  117954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:01.579108  117954 main.go:141] libmachine: (embed-certs-201087) DBG | Closing plugin on server side
	I0919 17:28:01.579121  117954 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:01.579135  117954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:01.579214  117954 main.go:141] libmachine: (embed-certs-201087) Calling .Close
	I0919 17:28:01.579550  117954 main.go:141] libmachine: (embed-certs-201087) DBG | Closing plugin on server side
	I0919 17:28:01.579568  117954 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:01.579582  117954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:01.579603  117954 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:01.579614  117954 main.go:141] libmachine: (embed-certs-201087) Calling .Close
	I0919 17:28:01.579849  117954 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:01.579864  117954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:01.579873  117954 addons.go:467] Verifying addon metrics-server=true in "embed-certs-201087"
	I0919 17:28:01.579875  117954 main.go:141] libmachine: (embed-certs-201087) DBG | Closing plugin on server side
	I0919 17:28:01.716148  117954 node_ready.go:58] node "embed-certs-201087" has status "Ready":"False"
	I0919 17:28:02.006687  117954 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.848273956s)
	I0919 17:28:02.006759  117954 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:02.006773  117954 main.go:141] libmachine: (embed-certs-201087) Calling .Close
	I0919 17:28:02.007081  117954 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:02.007102  117954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:02.007112  117954 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:02.007133  117954 main.go:141] libmachine: (embed-certs-201087) Calling .Close
	I0919 17:28:02.008932  117954 main.go:141] libmachine: (embed-certs-201087) DBG | Closing plugin on server side
	I0919 17:28:02.008962  117954 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:02.008978  117954 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:02.010700  117954 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-201087 addons enable metrics-server	
	
	
	I0919 17:28:02.012185  117954 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0919 17:28:00.752550  118166 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0919 17:28:00.752595  118166 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0919 17:28:00.776350  118166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 17:28:00.936694  118166 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 17:28:02.468030  118166 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.531248482s)
	I0919 17:28:02.468206  118166 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 17:28:02.504893  118166 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 17:27:59.824654  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .Start
	I0919 17:27:59.824850  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Ensuring networks are active...
	I0919 17:27:59.825530  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Ensuring network default is active
	I0919 17:27:59.825981  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Ensuring network mk-default-k8s-diff-port-210669 is active
	I0919 17:27:59.826533  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Getting domain xml...
	I0919 17:27:59.826949  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Creating domain...
	I0919 17:28:01.299229  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Waiting to get IP...
	I0919 17:28:01.300081  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:01.300597  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | unable to find current IP address of domain default-k8s-diff-port-210669 in network mk-default-k8s-diff-port-210669
	I0919 17:28:01.300705  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | I0919 17:28:01.300574  118615 retry.go:31] will retry after 235.492912ms: waiting for machine to come up
	I0919 17:28:01.538438  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:01.539092  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | unable to find current IP address of domain default-k8s-diff-port-210669 in network mk-default-k8s-diff-port-210669
	I0919 17:28:01.539116  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | I0919 17:28:01.539043  118615 retry.go:31] will retry after 385.440803ms: waiting for machine to come up
	I0919 17:28:01.926813  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:01.927419  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | unable to find current IP address of domain default-k8s-diff-port-210669 in network mk-default-k8s-diff-port-210669
	I0919 17:28:01.927547  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | I0919 17:28:01.927467  118615 retry.go:31] will retry after 484.575399ms: waiting for machine to come up
	I0919 17:28:02.414489  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:02.415149  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | unable to find current IP address of domain default-k8s-diff-port-210669 in network mk-default-k8s-diff-port-210669
	I0919 17:28:02.415201  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | I0919 17:28:02.415083  118615 retry.go:31] will retry after 409.526581ms: waiting for machine to come up
	I0919 17:28:02.827119  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:02.827824  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | unable to find current IP address of domain default-k8s-diff-port-210669 in network mk-default-k8s-diff-port-210669
	I0919 17:28:02.827894  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | I0919 17:28:02.827824  118615 retry.go:31] will retry after 621.042877ms: waiting for machine to come up
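The retry.go lines above show the wait-for-IP loop: each failed DHCP lookup schedules another attempt after a randomized, growing delay. A minimal sketch of that shape, with invented delay bounds (not minikube's actual backoff parameters):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP retries fn with a randomized, growing delay, mirroring the
    // "will retry after ..." lines in the log above.
    func waitForIP(fn func() error, deadline time.Duration) error {
    	start := time.Now()
    	for attempt := 1; time.Since(start) < deadline; attempt++ {
    		if err := fn(); err == nil {
    			return nil
    		}
    		sleep := time.Duration(200+rand.Intn(300*attempt)) * time.Millisecond
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    	}
    	return errors.New("timed out waiting for machine IP")
    }

    func main() {
    	calls := 0
    	_ = waitForIP(func() error {
    		if calls++; calls < 3 {
    			return errors.New("no IP yet")
    		}
    		return nil
    	}, 5*time.Second)
    }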
	I0919 17:28:02.013439  117954 addons.go:502] enable addons completed in 2.779805661s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0919 17:28:02.538527  118166 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.6 ...
	I0919 17:28:02.538587  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetIP
	I0919 17:28:02.542156  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:28:02.542761  118166 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0919 17:28:02.542874  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:9c:55", ip: ""} in network mk-old-k8s-version-367105: {Iface:virbr5 ExpiryTime:2023-09-19 18:27:47 +0000 UTC Type:0 Mac:52:54:00:5d:9c:55 Iaid: IPaddr:192.168.83.162 Prefix:24 Hostname:old-k8s-version-367105 Clientid:01:52:54:00:5d:9c:55}
	I0919 17:28:02.542925  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined IP address 192.168.83.162 and MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:28:02.548030  118166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 17:28:02.563215  118166 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0919 17:28:02.563283  118166 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 17:28:02.587307  118166 docker.go:636] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	k8s.gcr.io/pause:3.1
	registry.k8s.io/pause:3.1
	
	-- /stdout --
	I0919 17:28:02.587331  118166 docker.go:642] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0919 17:28:02.587386  118166 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0919 17:28:02.599337  118166 ssh_runner.go:195] Run: which lz4
	I0919 17:28:02.603575  118166 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0919 17:28:02.608233  118166 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 17:28:02.608264  118166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0919 17:28:04.167236  118166 docker.go:600] Took 1.563700 seconds to copy over tarball
	I0919 17:28:04.167316  118166 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
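The three steps above are the preload fast path: stat the guest for /preloaded.tar.lz4, scp the ~370 MB cached tarball over on a miss, then unpack it into /var with lz4. A sketch of the guest-side sequence, run locally for illustration (the real code drives each command over SSH via ssh_runner.go):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	const tarball = "/preloaded.tar.lz4" // guest-side path from the log
    	if _, err := os.Stat(tarball); err != nil {
    		fmt.Println("preload missing; minikube would scp it from the host cache")
    		return
    	}
    	// Equivalent of: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
    	out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput()
    	if err != nil {
    		fmt.Printf("extract failed: %v\n%s", err, out)
    	}
    }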
	I0919 17:28:03.080893  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:05.579979  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:03.450573  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:03.451162  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | unable to find current IP address of domain default-k8s-diff-port-210669 in network mk-default-k8s-diff-port-210669
	I0919 17:28:03.451192  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | I0919 17:28:03.451127  118615 retry.go:31] will retry after 895.37099ms: waiting for machine to come up
	I0919 17:28:04.348689  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:04.349251  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | unable to find current IP address of domain default-k8s-diff-port-210669 in network mk-default-k8s-diff-port-210669
	I0919 17:28:04.349288  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | I0919 17:28:04.349207  118615 retry.go:31] will retry after 978.724274ms: waiting for machine to come up
	I0919 17:28:05.329985  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:05.330527  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | unable to find current IP address of domain default-k8s-diff-port-210669 in network mk-default-k8s-diff-port-210669
	I0919 17:28:05.330563  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | I0919 17:28:05.330459  118615 retry.go:31] will retry after 938.346037ms: waiting for machine to come up
	I0919 17:28:06.270319  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:06.270886  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | unable to find current IP address of domain default-k8s-diff-port-210669 in network mk-default-k8s-diff-port-210669
	I0919 17:28:06.270920  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | I0919 17:28:06.270829  118615 retry.go:31] will retry after 1.396716228s: waiting for machine to come up
	I0919 17:28:07.669432  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:07.669972  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | unable to find current IP address of domain default-k8s-diff-port-210669 in network mk-default-k8s-diff-port-210669
	I0919 17:28:07.670005  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | I0919 17:28:07.669916  118615 retry.go:31] will retry after 2.050486178s: waiting for machine to come up
	I0919 17:28:03.719178  117954 node_ready.go:58] node "embed-certs-201087" has status "Ready":"False"
	I0919 17:28:05.789323  117954 node_ready.go:58] node "embed-certs-201087" has status "Ready":"False"
	I0919 17:28:06.619314  117954 node_ready.go:49] node "embed-certs-201087" has status "Ready":"True"
	I0919 17:28:06.619337  117954 node_ready.go:38] duration metric: took 6.922899139s waiting for node "embed-certs-201087" to be "Ready" ...
	I0919 17:28:06.619347  117954 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:28:06.628161  117954 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-v6ghh" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:07.671304  117954 pod_ready.go:92] pod "coredns-5dd5756b68-v6ghh" in "kube-system" namespace has status "Ready":"True"
	I0919 17:28:07.671323  117954 pod_ready.go:81] duration metric: took 1.04310813s waiting for pod "coredns-5dd5756b68-v6ghh" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:07.671333  117954 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-201087" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:08.194867  117954 pod_ready.go:92] pod "etcd-embed-certs-201087" in "kube-system" namespace has status "Ready":"True"
	I0919 17:28:08.194891  117954 pod_ready.go:81] duration metric: took 523.55053ms waiting for pod "etcd-embed-certs-201087" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:08.194900  117954 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-201087" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:08.200969  117954 pod_ready.go:92] pod "kube-apiserver-embed-certs-201087" in "kube-system" namespace has status "Ready":"True"
	I0919 17:28:08.200992  117954 pod_ready.go:81] duration metric: took 6.083808ms waiting for pod "kube-apiserver-embed-certs-201087" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:08.201003  117954 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-201087" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:08.206196  117954 pod_ready.go:92] pod "kube-controller-manager-embed-certs-201087" in "kube-system" namespace has status "Ready":"True"
	I0919 17:28:08.206215  117954 pod_ready.go:81] duration metric: took 5.202925ms waiting for pod "kube-controller-manager-embed-certs-201087" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:08.206227  117954 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7skcp" in "kube-system" namespace to be "Ready" ...
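The pod_ready.go lines above report "Ready":"True" for a pod once its PodReady condition flips to ConditionTrue. A minimal sketch of that check using client-go; the function name is invented and clientset construction is omitted, so this is an illustration of the condition being polled, not minikube's actual helper:

    package readiness

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // isPodReady returns true once the pod's PodReady condition is ConditionTrue,
    // the same state the log prints as has status "Ready":"True".
    func isPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }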
	I0919 17:28:07.021630  118166 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.854258986s)
	I0919 17:28:07.021743  118166 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0919 17:28:07.058796  118166 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0919 17:28:07.070808  118166 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3100 bytes)
	I0919 17:28:07.090682  118166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 17:28:07.208130  118166 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 17:28:09.306404  118166 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.098229438s)
	I0919 17:28:09.306506  118166 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 17:28:09.328747  118166 docker.go:636] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	k8s.gcr.io/pause:3.1
	registry.k8s.io/pause:3.1
	
	-- /stdout --
	I0919 17:28:09.328772  118166 docker.go:642] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0919 17:28:09.328782  118166 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0919 17:28:09.331254  118166 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:28:09.331307  118166 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:28:09.331340  118166 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0919 17:28:09.331275  118166 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:28:09.331490  118166 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:28:09.331496  118166 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0919 17:28:09.331592  118166 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:28:09.331610  118166 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0919 17:28:09.332660  118166 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0919 17:28:09.332703  118166 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:28:09.332714  118166 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0919 17:28:09.332753  118166 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0919 17:28:09.332828  118166 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:28:09.332663  118166 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:28:09.332660  118166 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:28:09.332867  118166 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:28:09.501375  118166 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0919 17:28:09.506192  118166 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:28:09.509288  118166 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0919 17:28:09.511585  118166 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:28:09.514620  118166 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0919 17:28:09.545660  118166 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:28:09.560253  118166 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:28:09.612295  118166 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0919 17:28:09.612356  118166 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.2
	I0919 17:28:09.612416  118166 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0919 17:28:09.612500  118166 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0919 17:28:09.612521  118166 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:28:09.612549  118166 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0919 17:28:09.617979  118166 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0919 17:28:09.618023  118166 docker.go:316] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0919 17:28:09.618064  118166 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0919 17:28:09.618164  118166 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0919 17:28:09.618190  118166 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:28:09.618213  118166 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0919 17:28:09.618366  118166 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0919 17:28:09.618413  118166 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:28:09.618455  118166 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0919 17:28:09.668792  118166 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0919 17:28:09.668867  118166 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:28:09.668915  118166 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0919 17:28:09.725000  118166 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-65689/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0919 17:28:09.725103  118166 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-65689/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0919 17:28:09.725177  118166 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-65689/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0919 17:28:09.725234  118166 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-65689/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0919 17:28:09.725290  118166 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-65689/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0919 17:28:09.741751  118166 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17240-65689/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0919 17:28:09.956775  118166 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:28:09.984791  118166 cache_images.go:92] LoadImages completed in 655.988923ms
	W0919 17:28:09.984898  118166 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17240-65689/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0: no such file or directory
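The "needs transfer" decisions above hinge on one comparison: the image ID the runtime reports versus the hash the cache expects; a mismatch (or a missing image) triggers a rmi followed by a reload from the cache directory. A sketch of that comparison, with a placeholder hash rather than a real digest:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // needsTransfer mirrors the cache_images.go decision: reload from cache when
    // the runtime has no image, or reports an ID other than the expected hash.
    func needsTransfer(image, wantID string) bool {
    	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
    	if err != nil {
    		return true // not present in the runtime at all
    	}
    	return strings.TrimSpace(string(out)) != wantID
    }

    func main() {
    	fmt.Println(needsTransfer("registry.k8s.io/coredns:1.6.2", "sha256:placeholder"))
    }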
	I0919 17:28:09.984983  118166 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 17:28:10.026462  118166 cni.go:84] Creating CNI manager for ""
	I0919 17:28:10.026496  118166 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0919 17:28:10.026521  118166 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0919 17:28:10.026551  118166 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.162 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-367105 NodeName:old-k8s-version-367105 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0919 17:28:10.026761  118166 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.162
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-367105"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-367105
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.83.162:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 17:28:10.026859  118166 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-367105 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-367105 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0919 17:28:10.026938  118166 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0919 17:28:10.039453  118166 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 17:28:10.039529  118166 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 17:28:10.050864  118166 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (350 bytes)
	I0919 17:28:10.073204  118166 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 17:28:10.101116  118166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2182 bytes)
	I0919 17:28:10.124187  118166 ssh_runner.go:195] Run: grep 192.168.83.162	control-plane.minikube.internal$ /etc/hosts
	I0919 17:28:10.129562  118166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
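The two commands above first grep /etc/hosts for an existing control-plane.minikube.internal mapping, then rewrite the file with a bash one-liner: filter out any stale line for the host, append the fresh tab-separated entry, and copy the temp file back. A sketch that rebuilds the same one-liner as it appears in the log:

    package main

    import "fmt"

    // hostsUpdateCmd rebuilds the bash one-liner from the log: drop any existing
    // line for the host, append a fresh tab-separated mapping, then copy the
    // result back over /etc/hosts.
    func hostsUpdateCmd(ip, host string) string {
    	return fmt.Sprintf("{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"", host, ip, host)
    }

    func main() {
    	fmt.Println(hostsUpdateCmd("192.168.83.162", "control-plane.minikube.internal"))
    }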
	I0919 17:28:10.147354  118166 certs.go:56] Setting up /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/old-k8s-version-367105 for IP: 192.168.83.162
	I0919 17:28:10.147410  118166 certs.go:190] acquiring lock for shared ca certs: {Name:mkf975c4ed215d047afb89379d3c517cec3820b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:28:10.147617  118166 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.key
	I0919 17:28:10.147716  118166 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.key
	I0919 17:28:10.147866  118166 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/old-k8s-version-367105/client.key
	I0919 17:28:10.147972  118166 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/old-k8s-version-367105/apiserver.key.d728743e
	I0919 17:28:10.148044  118166 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/old-k8s-version-367105/proxy-client.key
	I0919 17:28:10.148206  118166 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/73397.pem (1338 bytes)
	W0919 17:28:10.148288  118166 certs.go:433] ignoring /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/73397_empty.pem, impossibly tiny 0 bytes
	I0919 17:28:10.148313  118166 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 17:28:10.148368  118166 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem (1078 bytes)
	I0919 17:28:10.148421  118166 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem (1123 bytes)
	I0919 17:28:10.148459  118166 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem (1675 bytes)
	I0919 17:28:10.148529  118166 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem (1708 bytes)
	I0919 17:28:10.149352  118166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/old-k8s-version-367105/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0919 17:28:10.177611  118166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/old-k8s-version-367105/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 17:28:10.208372  118166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/old-k8s-version-367105/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 17:28:10.238007  118166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/old-k8s-version-367105/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 17:28:10.270270  118166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 17:28:10.302354  118166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 17:28:10.334682  118166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 17:28:10.367862  118166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 17:28:10.395587  118166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 17:28:10.423581  118166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/certs/73397.pem --> /usr/share/ca-certificates/73397.pem (1338 bytes)
	I0919 17:28:10.450931  118166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem --> /usr/share/ca-certificates/733972.pem (1708 bytes)
	I0919 17:28:10.481139  118166 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 17:28:10.503001  118166 ssh_runner.go:195] Run: openssl version
	I0919 17:28:10.509309  118166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73397.pem && ln -fs /usr/share/ca-certificates/73397.pem /etc/ssl/certs/73397.pem"
	I0919 17:28:10.521559  118166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73397.pem
	I0919 17:28:10.528133  118166 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 19 16:39 /usr/share/ca-certificates/73397.pem
	I0919 17:28:10.528209  118166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73397.pem
	I0919 17:28:10.535780  118166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/73397.pem /etc/ssl/certs/51391683.0"
	I0919 17:28:10.546082  118166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/733972.pem && ln -fs /usr/share/ca-certificates/733972.pem /etc/ssl/certs/733972.pem"
	I0919 17:28:10.556742  118166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/733972.pem
	I0919 17:28:10.563064  118166 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 19 16:39 /usr/share/ca-certificates/733972.pem
	I0919 17:28:10.563123  118166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/733972.pem
	I0919 17:28:10.570522  118166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/733972.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 17:28:10.584215  118166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 17:28:10.598388  118166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:28:10.605598  118166 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 19 16:35 /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:28:10.605685  118166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:28:10.612493  118166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 17:28:10.624734  118166 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0919 17:28:10.631007  118166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 17:28:10.637731  118166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 17:28:10.644176  118166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 17:28:10.651009  118166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 17:28:10.658288  118166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
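Each `openssl x509 -checkend 86400` run above asks one question: does the certificate expire within the next 24 hours (86400 seconds)? The same check in Go's standard library, with the path taken from the log:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin answers the same question as `openssl x509 -checkend`:
    // does the certificate's NotAfter fall inside the next d?
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
    	fmt.Println(soon, err)
    }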
	I0919 17:28:07.580183  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:10.080747  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:09.722921  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:09.723606  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | unable to find current IP address of domain default-k8s-diff-port-210669 in network mk-default-k8s-diff-port-210669
	I0919 17:28:09.723643  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | I0919 17:28:09.723503  118615 retry.go:31] will retry after 2.733713117s: waiting for machine to come up
	I0919 17:28:12.458921  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:12.459480  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | unable to find current IP address of domain default-k8s-diff-port-210669 in network mk-default-k8s-diff-port-210669
	I0919 17:28:12.459509  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | I0919 17:28:12.459440  118615 retry.go:31] will retry after 3.305096132s: waiting for machine to come up
	I0919 17:28:08.468887  117954 pod_ready.go:92] pod "kube-proxy-7skcp" in "kube-system" namespace has status "Ready":"True"
	I0919 17:28:08.523017  117954 pod_ready.go:81] duration metric: took 316.759888ms waiting for pod "kube-proxy-7skcp" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:08.523049  117954 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-201087" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:08.891310  117954 pod_ready.go:92] pod "kube-scheduler-embed-certs-201087" in "kube-system" namespace has status "Ready":"True"
	I0919 17:28:08.891338  117954 pod_ready.go:81] duration metric: took 368.279572ms waiting for pod "kube-scheduler-embed-certs-201087" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:08.891352  117954 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:11.176623  117954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:10.666788  118166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 17:28:10.674589  118166 kubeadm.go:404] StartCluster: {Name:old-k8s-version-367105 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-367105 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.162 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:28:10.674754  118166 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 17:28:10.698922  118166 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 17:28:10.711867  118166 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0919 17:28:10.711893  118166 kubeadm.go:636] restartCluster start
	I0919 17:28:10.711951  118166 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 17:28:10.724053  118166 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:10.725278  118166 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-367105" does not appear in /home/jenkins/minikube-integration/17240-65689/kubeconfig
	I0919 17:28:10.726051  118166 kubeconfig.go:146] "old-k8s-version-367105" context is missing from /home/jenkins/minikube-integration/17240-65689/kubeconfig - will repair!
	I0919 17:28:10.727121  118166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/kubeconfig: {Name:mkbd16610d1f40f08720849f4f6c1890dee4556c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:28:10.729493  118166 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 17:28:10.740443  118166 api_server.go:166] Checking apiserver status ...
	I0919 17:28:10.740543  118166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:10.759136  118166 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:10.759156  118166 api_server.go:166] Checking apiserver status ...
	I0919 17:28:10.759205  118166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:10.775137  118166 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:11.275867  118166 api_server.go:166] Checking apiserver status ...
	I0919 17:28:11.275978  118166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:11.291271  118166 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:11.775965  118166 api_server.go:166] Checking apiserver status ...
	I0919 17:28:11.776072  118166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:11.792391  118166 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:12.275685  118166 api_server.go:166] Checking apiserver status ...
	I0919 17:28:12.275772  118166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:12.291423  118166 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:12.776068  118166 api_server.go:166] Checking apiserver status ...
	I0919 17:28:12.776155  118166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:12.790288  118166 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:13.275818  118166 api_server.go:166] Checking apiserver status ...
	I0919 17:28:13.275926  118166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:13.288432  118166 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:13.776105  118166 api_server.go:166] Checking apiserver status ...
	I0919 17:28:13.776209  118166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:13.789901  118166 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:14.275523  118166 api_server.go:166] Checking apiserver status ...
	I0919 17:28:14.275624  118166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:14.293271  118166 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:14.776247  118166 api_server.go:166] Checking apiserver status ...
	I0919 17:28:14.776336  118166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:14.791242  118166 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:15.275909  118166 api_server.go:166] Checking apiserver status ...
	I0919 17:28:15.275997  118166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:15.289438  118166 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:12.089085  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:14.581304  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:15.766530  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:15.767047  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | unable to find current IP address of domain default-k8s-diff-port-210669 in network mk-default-k8s-diff-port-210669
	I0919 17:28:15.767079  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | I0919 17:28:15.767004  118615 retry.go:31] will retry after 3.054481176s: waiting for machine to come up
	I0919 17:28:13.675429  117954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:15.690052  117954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:18.176299  117954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:15.775404  118166 api_server.go:166] Checking apiserver status ...
	I0919 17:28:15.775498  118166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:15.788547  118166 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:16.276173  118166 api_server.go:166] Checking apiserver status ...
	I0919 17:28:16.276263  118166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:16.288704  118166 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:16.776278  118166 api_server.go:166] Checking apiserver status ...
	I0919 17:28:16.776354  118166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:16.788055  118166 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:17.275222  118166 api_server.go:166] Checking apiserver status ...
	I0919 17:28:17.275295  118166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:17.286734  118166 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:17.775277  118166 api_server.go:166] Checking apiserver status ...
	I0919 17:28:17.775406  118166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:17.787128  118166 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:18.275619  118166 api_server.go:166] Checking apiserver status ...
	I0919 17:28:18.275691  118166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:18.288186  118166 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:18.775658  118166 api_server.go:166] Checking apiserver status ...
	I0919 17:28:18.775756  118166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:18.791101  118166 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:19.275346  118166 api_server.go:166] Checking apiserver status ...
	I0919 17:28:19.275440  118166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:19.291158  118166 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:19.775336  118166 api_server.go:166] Checking apiserver status ...
	I0919 17:28:19.775414  118166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:19.788885  118166 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:20.275416  118166 api_server.go:166] Checking apiserver status ...
	I0919 17:28:20.275500  118166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:20.287923  118166 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
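The long run of "Checking apiserver status ..." lines above is a fixed-cadence poll: roughly every 500ms the same `sudo pgrep -xnf kube-apiserver.*minikube.*` probe runs, and exit status 1 (no matching process) keeps the loop going. A sketch of that loop; the cadence matches the timestamps, the timeout value is invented:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer repeats the probe from the log until pgrep exits 0
    // (a matching kube-apiserver process exists) or the deadline passes.
    func waitForAPIServer(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("kube-apiserver process never appeared within %v", timeout)
    }

    func main() {
    	fmt.Println(waitForAPIServer(10 * time.Second))
    }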
	I0919 17:28:17.078584  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:19.080619  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:21.081237  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:18.822661  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:18.823171  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | unable to find current IP address of domain default-k8s-diff-port-210669 in network mk-default-k8s-diff-port-210669
	I0919 17:28:18.823194  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | I0919 17:28:18.823121  118615 retry.go:31] will retry after 4.38152364s: waiting for machine to come up
	I0919 17:28:20.677944  117954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:22.704950  117954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:20.740692  118166 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0919 17:28:20.740719  118166 kubeadm.go:1128] stopping kube-system containers ...
	I0919 17:28:20.740780  118166 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 17:28:20.769017  118166 docker.go:462] Stopping containers: [22d49d645db9 957eb4fc8e8a 08335c189fc9 9fa1901a5efd 03f76c90629d 5a83916c0cd9 74289851f17e 66c34702bde2 519a593bc277 055183b8db4b 5cfad067cf99 d6a66ee0e63c e98675b0244a 51816ee18c21]
	I0919 17:28:20.769103  118166 ssh_runner.go:195] Run: docker stop 22d49d645db9 957eb4fc8e8a 08335c189fc9 9fa1901a5efd 03f76c90629d 5a83916c0cd9 74289851f17e 66c34702bde2 519a593bc277 055183b8db4b 5cfad067cf99 d6a66ee0e63c e98675b0244a 51816ee18c21
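The pair of commands above is a list-then-stop: `docker ps -a` filters for the kubelet's k8s_*_(kube-system)_ container name pattern and prints only IDs, which then feed a single `docker stop`. A sketch reproducing that pair:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // stopKubeSystemContainers lists all kube-system pod containers by ID,
    // then stops them in one docker invocation, as the log does.
    func stopKubeSystemContainers() error {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
    	if err != nil {
    		return err
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		return nil // nothing to stop
    	}
    	return exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
    }

    func main() {
    	fmt.Println(stopKubeSystemContainers())
    }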
	I0919 17:28:20.791684  118166 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0919 17:28:20.807370  118166 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 17:28:20.817050  118166 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 17:28:20.817119  118166 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 17:28:20.826488  118166 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0919 17:28:20.826518  118166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:28:20.974916  118166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:28:21.798935  118166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:28:22.039917  118166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:28:22.137435  118166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:28:22.264242  118166 api_server.go:52] waiting for apiserver process to appear ...
	I0919 17:28:22.264325  118166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:28:22.277425  118166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:28:22.793356  118166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:28:23.292995  118166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:28:23.793534  118166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:28:24.293836  118166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:28:24.317683  118166 api_server.go:72] duration metric: took 2.053455192s to wait for apiserver process to appear ...
	I0919 17:28:24.317711  118166 api_server.go:88] waiting for apiserver healthz status ...
	I0919 17:28:24.317733  118166 api_server.go:253] Checking apiserver healthz at https://192.168.83.162:8443/healthz ...
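Once the process exists, the wait switches from pgrep to the /healthz endpoint above. A sketch of such a poll; it skips TLS verification because the apiserver's cert is signed by the cluster-internal CA, which is acceptable for a bare liveness probe (a production check would pin that CA instead), and the timeout is invented:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the endpoint until it returns 200 OK or the
    // deadline passes.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if resp, err := client.Get(url); err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("%s not healthy within %v", url, timeout)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://192.168.83.162:8443/healthz", 30*time.Second))
    }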
	I0919 17:28:23.581781  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:26.079585  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:23.206711  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:23.207320  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Found IP for machine: 192.168.61.204
	I0919 17:28:23.207350  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Reserving static IP address...
	I0919 17:28:23.207368  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has current primary IP address 192.168.61.204 and MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:23.207760  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-210669", mac: "52:54:00:76:95:3e", ip: "192.168.61.204"} in network mk-default-k8s-diff-port-210669: {Iface:virbr3 ExpiryTime:2023-09-19 18:28:13 +0000 UTC Type:0 Mac:52:54:00:76:95:3e Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:default-k8s-diff-port-210669 Clientid:01:52:54:00:76:95:3e}
	I0919 17:28:23.207786  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Reserved static IP address: 192.168.61.204
	I0919 17:28:23.207812  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | skip adding static IP to network mk-default-k8s-diff-port-210669 - found existing host DHCP lease matching {name: "default-k8s-diff-port-210669", mac: "52:54:00:76:95:3e", ip: "192.168.61.204"}
	I0919 17:28:23.207834  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | Getting to WaitForSSH function...
	I0919 17:28:23.207851  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Waiting for SSH to be available...
	I0919 17:28:23.210607  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:23.210954  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:95:3e", ip: ""} in network mk-default-k8s-diff-port-210669: {Iface:virbr3 ExpiryTime:2023-09-19 18:28:13 +0000 UTC Type:0 Mac:52:54:00:76:95:3e Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:default-k8s-diff-port-210669 Clientid:01:52:54:00:76:95:3e}
	I0919 17:28:23.210987  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined IP address 192.168.61.204 and MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:23.211247  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | Using SSH client type: external
	I0919 17:28:23.211290  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | Using SSH private key: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/default-k8s-diff-port-210669/id_rsa (-rw-------)
	I0919 17:28:23.211330  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.204 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17240-65689/.minikube/machines/default-k8s-diff-port-210669/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 17:28:23.211352  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | About to run SSH command:
	I0919 17:28:23.211370  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | exit 0
	I0919 17:28:23.314653  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | SSH cmd err, output: <nil>: 
	I0919 17:28:23.315045  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetConfigRaw
	I0919 17:28:23.315734  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetIP
	I0919 17:28:23.319227  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:23.319704  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:95:3e", ip: ""} in network mk-default-k8s-diff-port-210669: {Iface:virbr3 ExpiryTime:2023-09-19 18:28:13 +0000 UTC Type:0 Mac:52:54:00:76:95:3e Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:default-k8s-diff-port-210669 Clientid:01:52:54:00:76:95:3e}
	I0919 17:28:23.319758  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined IP address 192.168.61.204 and MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:23.320126  118443 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/default-k8s-diff-port-210669/config.json ...
	I0919 17:28:23.320520  118443 machine.go:88] provisioning docker machine ...
	I0919 17:28:23.320547  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .DriverName
	I0919 17:28:23.320778  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetMachineName
	I0919 17:28:23.320998  118443 buildroot.go:166] provisioning hostname "default-k8s-diff-port-210669"
	I0919 17:28:23.321025  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetMachineName
	I0919 17:28:23.321209  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHHostname
	I0919 17:28:23.323799  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:23.324209  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:95:3e", ip: ""} in network mk-default-k8s-diff-port-210669: {Iface:virbr3 ExpiryTime:2023-09-19 18:28:13 +0000 UTC Type:0 Mac:52:54:00:76:95:3e Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:default-k8s-diff-port-210669 Clientid:01:52:54:00:76:95:3e}
	I0919 17:28:23.324250  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined IP address 192.168.61.204 and MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:23.324416  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHPort
	I0919 17:28:23.324616  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHKeyPath
	I0919 17:28:23.324784  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHKeyPath
	I0919 17:28:23.324938  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHUsername
	I0919 17:28:23.325106  118443 main.go:141] libmachine: Using SSH client type: native
	I0919 17:28:23.325638  118443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.204 22 <nil> <nil>}
	I0919 17:28:23.325663  118443 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-210669 && echo "default-k8s-diff-port-210669" | sudo tee /etc/hostname
	I0919 17:28:23.479575  118443 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-210669
	
	I0919 17:28:23.479612  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHHostname
	I0919 17:28:23.483121  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:23.483593  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:95:3e", ip: ""} in network mk-default-k8s-diff-port-210669: {Iface:virbr3 ExpiryTime:2023-09-19 18:28:13 +0000 UTC Type:0 Mac:52:54:00:76:95:3e Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:default-k8s-diff-port-210669 Clientid:01:52:54:00:76:95:3e}
	I0919 17:28:23.483775  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined IP address 192.168.61.204 and MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:23.484124  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHPort
	I0919 17:28:23.484324  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHKeyPath
	I0919 17:28:23.484531  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHKeyPath
	I0919 17:28:23.484702  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHUsername
	I0919 17:28:23.484904  118443 main.go:141] libmachine: Using SSH client type: native
	I0919 17:28:23.485386  118443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.204 22 <nil> <nil>}
	I0919 17:28:23.485411  118443 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-210669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-210669/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-210669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 17:28:23.637002  118443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 17:28:23.637036  118443 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17240-65689/.minikube CaCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17240-65689/.minikube}
	I0919 17:28:23.637055  118443 buildroot.go:174] setting up certificates
	I0919 17:28:23.637074  118443 provision.go:83] configureAuth start
	I0919 17:28:23.637084  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetMachineName
	I0919 17:28:23.637356  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetIP
	I0919 17:28:23.640423  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:23.640798  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:95:3e", ip: ""} in network mk-default-k8s-diff-port-210669: {Iface:virbr3 ExpiryTime:2023-09-19 18:28:13 +0000 UTC Type:0 Mac:52:54:00:76:95:3e Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:default-k8s-diff-port-210669 Clientid:01:52:54:00:76:95:3e}
	I0919 17:28:23.640837  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined IP address 192.168.61.204 and MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:23.641048  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHHostname
	I0919 17:28:23.643753  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:23.644278  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:95:3e", ip: ""} in network mk-default-k8s-diff-port-210669: {Iface:virbr3 ExpiryTime:2023-09-19 18:28:13 +0000 UTC Type:0 Mac:52:54:00:76:95:3e Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:default-k8s-diff-port-210669 Clientid:01:52:54:00:76:95:3e}
	I0919 17:28:23.644315  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined IP address 192.168.61.204 and MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:23.644613  118443 provision.go:138] copyHostCerts
	I0919 17:28:23.644717  118443 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem, removing ...
	I0919 17:28:23.644738  118443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem
	I0919 17:28:23.644800  118443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/ca.pem (1078 bytes)
	I0919 17:28:23.644924  118443 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem, removing ...
	I0919 17:28:23.644933  118443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem
	I0919 17:28:23.644964  118443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/cert.pem (1123 bytes)
	I0919 17:28:23.645087  118443 exec_runner.go:144] found /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem, removing ...
	I0919 17:28:23.645111  118443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem
	I0919 17:28:23.645161  118443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17240-65689/.minikube/key.pem (1675 bytes)
	I0919 17:28:23.645263  118443 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-210669 san=[192.168.61.204 192.168.61.204 localhost 127.0.0.1 minikube default-k8s-diff-port-210669]
	I0919 17:28:23.931167  118443 provision.go:172] copyRemoteCerts
	I0919 17:28:23.931245  118443 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 17:28:23.931289  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHHostname
	I0919 17:28:23.934581  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:23.934976  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:95:3e", ip: ""} in network mk-default-k8s-diff-port-210669: {Iface:virbr3 ExpiryTime:2023-09-19 18:28:13 +0000 UTC Type:0 Mac:52:54:00:76:95:3e Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:default-k8s-diff-port-210669 Clientid:01:52:54:00:76:95:3e}
	I0919 17:28:23.935027  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined IP address 192.168.61.204 and MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:23.935217  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHPort
	I0919 17:28:23.935422  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHKeyPath
	I0919 17:28:23.935648  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHUsername
	I0919 17:28:23.935834  118443 sshutil.go:53] new ssh client: &{IP:192.168.61.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/default-k8s-diff-port-210669/id_rsa Username:docker}
	I0919 17:28:24.037393  118443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0919 17:28:24.067930  118443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 17:28:24.099283  118443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 17:28:24.124401  118443 provision.go:86] duration metric: configureAuth took 487.304492ms
	I0919 17:28:24.124438  118443 buildroot.go:189] setting minikube options for container-runtime
	I0919 17:28:24.124677  118443 config.go:182] Loaded profile config "default-k8s-diff-port-210669": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 17:28:24.124718  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .DriverName
	I0919 17:28:24.125039  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHHostname
	I0919 17:28:24.127751  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:24.128205  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:95:3e", ip: ""} in network mk-default-k8s-diff-port-210669: {Iface:virbr3 ExpiryTime:2023-09-19 18:28:13 +0000 UTC Type:0 Mac:52:54:00:76:95:3e Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:default-k8s-diff-port-210669 Clientid:01:52:54:00:76:95:3e}
	I0919 17:28:24.128243  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined IP address 192.168.61.204 and MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:24.128384  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHPort
	I0919 17:28:24.128612  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHKeyPath
	I0919 17:28:24.128804  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHKeyPath
	I0919 17:28:24.128975  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHUsername
	I0919 17:28:24.129163  118443 main.go:141] libmachine: Using SSH client type: native
	I0919 17:28:24.129641  118443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.204 22 <nil> <nil>}
	I0919 17:28:24.129670  118443 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0919 17:28:24.259804  118443 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0919 17:28:24.259836  118443 buildroot.go:70] root file system type: tmpfs
	I0919 17:28:24.260003  118443 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0919 17:28:24.260032  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHHostname
	I0919 17:28:24.262928  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:24.263287  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:95:3e", ip: ""} in network mk-default-k8s-diff-port-210669: {Iface:virbr3 ExpiryTime:2023-09-19 18:28:13 +0000 UTC Type:0 Mac:52:54:00:76:95:3e Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:default-k8s-diff-port-210669 Clientid:01:52:54:00:76:95:3e}
	I0919 17:28:24.263336  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined IP address 192.168.61.204 and MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:24.263599  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHPort
	I0919 17:28:24.263801  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHKeyPath
	I0919 17:28:24.263968  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHKeyPath
	I0919 17:28:24.264147  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHUsername
	I0919 17:28:24.264344  118443 main.go:141] libmachine: Using SSH client type: native
	I0919 17:28:24.264878  118443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.204 22 <nil> <nil>}
	I0919 17:28:24.264991  118443 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0919 17:28:24.422896  118443 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
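	The echoed unit above is exactly what was written to docker.service.new; a quick way to confirm systemd picked up the intended file after the swap (a sketch; systemctl cat and systemd-analyze verify are standard tools, and the profile name is taken from this log):

	    # Show the unit systemd resolved and lint it for syntax problems.
	    minikube ssh -p default-k8s-diff-port-210669 -- sudo systemctl cat docker
	    minikube ssh -p default-k8s-diff-port-210669 -- sudo systemd-analyze verify /lib/systemd/system/docker.service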
	I0919 17:28:24.422934  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHHostname
	I0919 17:28:24.425980  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:24.426412  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:95:3e", ip: ""} in network mk-default-k8s-diff-port-210669: {Iface:virbr3 ExpiryTime:2023-09-19 18:28:13 +0000 UTC Type:0 Mac:52:54:00:76:95:3e Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:default-k8s-diff-port-210669 Clientid:01:52:54:00:76:95:3e}
	I0919 17:28:24.426449  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined IP address 192.168.61.204 and MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:24.426673  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHPort
	I0919 17:28:24.426910  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHKeyPath
	I0919 17:28:24.427107  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHKeyPath
	I0919 17:28:24.427280  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHUsername
	I0919 17:28:24.427477  118443 main.go:141] libmachine: Using SSH client type: native
	I0919 17:28:24.427822  118443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.204 22 <nil> <nil>}
	I0919 17:28:24.427853  118443 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0919 17:28:25.473099  118443 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0919 17:28:25.473133  118443 machine.go:91] provisioned docker machine in 2.152590932s
	I0919 17:28:25.473149  118443 start.go:300] post-start starting for "default-k8s-diff-port-210669" (driver="kvm2")
	I0919 17:28:25.473162  118443 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 17:28:25.473177  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .DriverName
	I0919 17:28:25.473551  118443 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 17:28:25.473590  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHHostname
	I0919 17:28:25.476418  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:25.476798  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:95:3e", ip: ""} in network mk-default-k8s-diff-port-210669: {Iface:virbr3 ExpiryTime:2023-09-19 18:28:13 +0000 UTC Type:0 Mac:52:54:00:76:95:3e Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:default-k8s-diff-port-210669 Clientid:01:52:54:00:76:95:3e}
	I0919 17:28:25.476824  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined IP address 192.168.61.204 and MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:25.476998  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHPort
	I0919 17:28:25.477211  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHKeyPath
	I0919 17:28:25.477432  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHUsername
	I0919 17:28:25.477658  118443 sshutil.go:53] new ssh client: &{IP:192.168.61.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/default-k8s-diff-port-210669/id_rsa Username:docker}
	I0919 17:28:25.574921  118443 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 17:28:25.581069  118443 info.go:137] Remote host: Buildroot 2021.02.12
	I0919 17:28:25.581094  118443 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-65689/.minikube/addons for local assets ...
	I0919 17:28:25.581170  118443 filesync.go:126] Scanning /home/jenkins/minikube-integration/17240-65689/.minikube/files for local assets ...
	I0919 17:28:25.581265  118443 filesync.go:149] local asset: /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem -> 733972.pem in /etc/ssl/certs
	I0919 17:28:25.581386  118443 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 17:28:25.593234  118443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem --> /etc/ssl/certs/733972.pem (1708 bytes)
	I0919 17:28:25.622467  118443 start.go:303] post-start completed in 149.298013ms
	I0919 17:28:25.622496  118443 fix.go:56] fixHost completed within 25.823348515s
	I0919 17:28:25.622522  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHHostname
	I0919 17:28:25.625526  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:25.625905  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:95:3e", ip: ""} in network mk-default-k8s-diff-port-210669: {Iface:virbr3 ExpiryTime:2023-09-19 18:28:13 +0000 UTC Type:0 Mac:52:54:00:76:95:3e Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:default-k8s-diff-port-210669 Clientid:01:52:54:00:76:95:3e}
	I0919 17:28:25.625936  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined IP address 192.168.61.204 and MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:25.626137  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHPort
	I0919 17:28:25.626393  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHKeyPath
	I0919 17:28:25.626601  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHKeyPath
	I0919 17:28:25.626781  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHUsername
	I0919 17:28:25.627017  118443 main.go:141] libmachine: Using SSH client type: native
	I0919 17:28:25.627436  118443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.61.204 22 <nil> <nil>}
	I0919 17:28:25.627453  118443 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 17:28:25.766598  118443 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695144505.748883102
	
	I0919 17:28:25.766620  118443 fix.go:206] guest clock: 1695144505.748883102
	I0919 17:28:25.766640  118443 fix.go:219] Guest: 2023-09-19 17:28:25.748883102 +0000 UTC Remote: 2023-09-19 17:28:25.622500704 +0000 UTC m=+47.533944450 (delta=126.382398ms)
	I0919 17:28:25.766666  118443 fix.go:190] guest clock delta is within tolerance: 126.382398ms
	I0919 17:28:25.766673  118443 start.go:83] releasing machines lock for "default-k8s-diff-port-210669", held for 25.967573745s
	I0919 17:28:25.766697  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .DriverName
	I0919 17:28:25.766985  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetIP
	I0919 17:28:25.769691  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:25.770107  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:95:3e", ip: ""} in network mk-default-k8s-diff-port-210669: {Iface:virbr3 ExpiryTime:2023-09-19 18:28:13 +0000 UTC Type:0 Mac:52:54:00:76:95:3e Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:default-k8s-diff-port-210669 Clientid:01:52:54:00:76:95:3e}
	I0919 17:28:25.770135  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined IP address 192.168.61.204 and MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:25.770283  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .DriverName
	I0919 17:28:25.770839  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .DriverName
	I0919 17:28:25.771024  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .DriverName
	I0919 17:28:25.771111  118443 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 17:28:25.771153  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHHostname
	I0919 17:28:25.771240  118443 ssh_runner.go:195] Run: cat /version.json
	I0919 17:28:25.771259  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHHostname
	I0919 17:28:25.773971  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:25.774165  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:25.774341  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:95:3e", ip: ""} in network mk-default-k8s-diff-port-210669: {Iface:virbr3 ExpiryTime:2023-09-19 18:28:13 +0000 UTC Type:0 Mac:52:54:00:76:95:3e Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:default-k8s-diff-port-210669 Clientid:01:52:54:00:76:95:3e}
	I0919 17:28:25.774375  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined IP address 192.168.61.204 and MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:25.774527  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHPort
	I0919 17:28:25.774531  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:95:3e", ip: ""} in network mk-default-k8s-diff-port-210669: {Iface:virbr3 ExpiryTime:2023-09-19 18:28:13 +0000 UTC Type:0 Mac:52:54:00:76:95:3e Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:default-k8s-diff-port-210669 Clientid:01:52:54:00:76:95:3e}
	I0919 17:28:25.774561  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined IP address 192.168.61.204 and MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:25.774721  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHPort
	I0919 17:28:25.774817  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHKeyPath
	I0919 17:28:25.774874  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHKeyPath
	I0919 17:28:25.775007  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHUsername
	I0919 17:28:25.775026  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHUsername
	I0919 17:28:25.775159  118443 sshutil.go:53] new ssh client: &{IP:192.168.61.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/default-k8s-diff-port-210669/id_rsa Username:docker}
	I0919 17:28:25.775217  118443 sshutil.go:53] new ssh client: &{IP:192.168.61.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/default-k8s-diff-port-210669/id_rsa Username:docker}
	I0919 17:28:25.885777  118443 ssh_runner.go:195] Run: systemctl --version
	I0919 17:28:25.891641  118443 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 17:28:25.897353  118443 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 17:28:25.897418  118443 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 17:28:25.912534  118443 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 17:28:25.912555  118443 start.go:469] detecting cgroup driver to use...
	I0919 17:28:25.912660  118443 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 17:28:25.933576  118443 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0919 17:28:25.943990  118443 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0919 17:28:25.955438  118443 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0919 17:28:25.955502  118443 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0919 17:28:25.966067  118443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 17:28:25.977044  118443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0919 17:28:25.986836  118443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0919 17:28:25.996560  118443 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 17:28:26.006926  118443 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0919 17:28:26.016682  118443 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 17:28:26.025669  118443 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 17:28:26.034526  118443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 17:28:26.144292  118443 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0919 17:28:26.162554  118443 start.go:469] detecting cgroup driver to use...
	I0919 17:28:26.162649  118443 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0919 17:28:26.180057  118443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 17:28:26.195193  118443 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 17:28:26.220317  118443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 17:28:26.233032  118443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 17:28:26.244800  118443 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0919 17:28:26.273070  118443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0919 17:28:26.285733  118443 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 17:28:26.302441  118443 ssh_runner.go:195] Run: which cri-dockerd
	I0919 17:28:26.306219  118443 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0919 17:28:26.314940  118443 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0919 17:28:26.332005  118443 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0919 17:28:26.445120  118443 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0919 17:28:26.565085  118443 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0919 17:28:26.565129  118443 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0919 17:28:26.584480  118443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 17:28:26.703839  118443 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0919 17:28:25.177476  117954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:27.179756  117954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:28.189155  118443 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.48527675s)
	I0919 17:28:28.189217  118443 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 17:28:28.299462  118443 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0919 17:28:28.413257  118443 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0919 17:28:28.534958  118443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 17:28:28.677741  118443 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0919 17:28:28.697610  118443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 17:28:28.827218  118443 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
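	The unmask/enable/daemon-reload/restart sequence above is what brings cri-dockerd back; the socket check the log performs next can be run directly (a sketch, same profile as in the log):

	    # Confirm the CRI socket came back after the restart sequence.
	    minikube ssh -p default-k8s-diff-port-210669 -- sudo systemctl is-active cri-docker.socket
	    minikube ssh -p default-k8s-diff-port-210669 -- stat /var/run/cri-dockerd.sock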
	I0919 17:28:28.917829  118443 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0919 17:28:28.917915  118443 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0919 17:28:28.924799  118443 start.go:537] Will wait 60s for crictl version
	I0919 17:28:28.924899  118443 ssh_runner.go:195] Run: which crictl
	I0919 17:28:28.929486  118443 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 17:28:28.997557  118443 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I0919 17:28:28.997656  118443 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0919 17:28:29.035223  118443 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
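	These docker version probes, together with the crictl call just before them, are the provisioner's runtime sanity checks; condensed, the same information comes from (a sketch, commands as in the log):

	    # Runtime identity and cgroup driver, as the provisioner queries them.
	    minikube ssh -p default-k8s-diff-port-210669 -- sudo crictl version
	    minikube ssh -p default-k8s-diff-port-210669 -- docker info --format '{{.CgroupDriver}}'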
	I0919 17:28:29.318993  118166 api_server.go:269] stopped: https://192.168.83.162:8443/healthz: Get "https://192.168.83.162:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 17:28:29.319045  118166 api_server.go:253] Checking apiserver healthz at https://192.168.83.162:8443/healthz ...
	I0919 17:28:29.810784  118166 api_server.go:279] https://192.168.83.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0919 17:28:29.810808  118166 api_server.go:103] status: https://192.168.83.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0919 17:28:30.311504  118166 api_server.go:253] Checking apiserver healthz at https://192.168.83.162:8443/healthz ...
	I0919 17:28:30.319227  118166 api_server.go:279] https://192.168.83.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0919 17:28:30.319254  118166 api_server.go:103] status: https://192.168.83.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
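	The 403 (anonymous user) followed by 500 (post-start hooks still settling) is the normal progression while the control plane boots; the wait loop in api_server.go amounts to polling the same endpoint until it returns 200. A minimal equivalent (curl only; -k skips TLS verification because the probe is unauthenticated):

	    # Poll apiserver healthz until it reports 200.
	    until curl -sk -o /dev/null -w '%{http_code}\n' https://192.168.83.162:8443/healthz | grep -q '^200$'; do
	      sleep 1
	    done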
	I0919 17:28:28.080311  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:30.579838  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
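	The pod_ready messages above are minikube polling the metrics-server pod's Ready condition; the manual equivalent (assuming a kubectl context pointing at the same cluster, whose profile name this excerpt does not show):

	    # Check the Ready condition that pod_ready.go is waiting on.
	    kubectl -n kube-system get pod metrics-server-57f55c9bc5-vxx4h \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'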
	I0919 17:28:29.072508  118443 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I0919 17:28:29.072564  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetIP
	I0919 17:28:29.075897  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:29.076365  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:95:3e", ip: ""} in network mk-default-k8s-diff-port-210669: {Iface:virbr3 ExpiryTime:2023-09-19 18:28:13 +0000 UTC Type:0 Mac:52:54:00:76:95:3e Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:default-k8s-diff-port-210669 Clientid:01:52:54:00:76:95:3e}
	I0919 17:28:29.076402  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined IP address 192.168.61.204 and MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:29.076735  118443 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0919 17:28:29.081117  118443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 17:28:29.096731  118443 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0919 17:28:29.096840  118443 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 17:28:29.117664  118443 docker.go:636] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0919 17:28:29.117694  118443 docker.go:566] Images already preloaded, skipping extraction
	I0919 17:28:29.117761  118443 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 17:28:29.138620  118443 docker.go:636] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0919 17:28:29.138651  118443 cache_images.go:84] Images are preloaded, skipping loading
	I0919 17:28:29.138717  118443 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0919 17:28:29.168073  118443 cni.go:84] Creating CNI manager for ""
	I0919 17:28:29.168107  118443 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 17:28:29.168131  118443 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0919 17:28:29.168158  118443 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.204 APIServerPort:8444 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-210669 NodeName:default-k8s-diff-port-210669 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.204"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.204 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 17:28:29.168361  118443 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.204
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-210669"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.204
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.204"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
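	The rendered kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new before being applied; one could dry-check it first with kubeadm's preflight phase (a sketch only; preflight reports problems it finds against the staged file without reconfiguring anything):

	    # Preflight-check the staged config with the matching kubeadm binary.
	    minikube ssh -p default-k8s-diff-port-210669 -- 'sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml.new'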
	I0919 17:28:29.168453  118443 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=default-k8s-diff-port-210669 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.204
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-210669 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0919 17:28:29.168518  118443 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0919 17:28:29.178441  118443 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 17:28:29.178511  118443 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 17:28:29.187124  118443 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (391 bytes)
	I0919 17:28:29.203784  118443 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 17:28:29.220058  118443 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0919 17:28:29.239076  118443 ssh_runner.go:195] Run: grep 192.168.61.204	control-plane.minikube.internal$ /etc/hosts
	I0919 17:28:29.243363  118443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.204	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 17:28:29.255550  118443 certs.go:56] Setting up /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/default-k8s-diff-port-210669 for IP: 192.168.61.204
	I0919 17:28:29.255585  118443 certs.go:190] acquiring lock for shared ca certs: {Name:mkf975c4ed215d047afb89379d3c517cec3820b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:28:29.255762  118443 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17240-65689/.minikube/ca.key
	I0919 17:28:29.255812  118443 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.key
	I0919 17:28:29.255936  118443 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/default-k8s-diff-port-210669/client.key
	I0919 17:28:29.256018  118443 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/default-k8s-diff-port-210669/apiserver.key.9889daca
	I0919 17:28:29.256069  118443 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/default-k8s-diff-port-210669/proxy-client.key
	I0919 17:28:29.256223  118443 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/73397.pem (1338 bytes)
	W0919 17:28:29.256263  118443 certs.go:433] ignoring /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/73397_empty.pem, impossibly tiny 0 bytes
	I0919 17:28:29.256279  118443 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca-key.pem (1679 bytes)
	I0919 17:28:29.256310  118443 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/ca.pem (1078 bytes)
	I0919 17:28:29.256339  118443 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/cert.pem (1123 bytes)
	I0919 17:28:29.256373  118443 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/certs/home/jenkins/minikube-integration/17240-65689/.minikube/certs/key.pem (1675 bytes)
	I0919 17:28:29.256419  118443 certs.go:437] found cert: /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem (1708 bytes)
	I0919 17:28:29.257204  118443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/default-k8s-diff-port-210669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0919 17:28:29.284186  118443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/default-k8s-diff-port-210669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 17:28:29.312417  118443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/default-k8s-diff-port-210669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 17:28:29.337219  118443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/default-k8s-diff-port-210669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 17:28:29.364839  118443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 17:28:29.393778  118443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 17:28:29.418768  118443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 17:28:29.442705  118443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 17:28:29.466964  118443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/certs/73397.pem --> /usr/share/ca-certificates/73397.pem (1338 bytes)
	I0919 17:28:29.489014  118443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/ssl/certs/733972.pem --> /usr/share/ca-certificates/733972.pem (1708 bytes)
	I0919 17:28:29.514967  118443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17240-65689/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 17:28:29.541558  118443 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 17:28:29.561360  118443 ssh_runner.go:195] Run: openssl version
	I0919 17:28:29.568814  118443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73397.pem && ln -fs /usr/share/ca-certificates/73397.pem /etc/ssl/certs/73397.pem"
	I0919 17:28:29.582011  118443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73397.pem
	I0919 17:28:29.588005  118443 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 19 16:39 /usr/share/ca-certificates/73397.pem
	I0919 17:28:29.588075  118443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73397.pem
	I0919 17:28:29.595128  118443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/73397.pem /etc/ssl/certs/51391683.0"
	I0919 17:28:29.608246  118443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/733972.pem && ln -fs /usr/share/ca-certificates/733972.pem /etc/ssl/certs/733972.pem"
	I0919 17:28:29.621271  118443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/733972.pem
	I0919 17:28:29.627219  118443 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 19 16:39 /usr/share/ca-certificates/733972.pem
	I0919 17:28:29.627312  118443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/733972.pem
	I0919 17:28:29.634514  118443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/733972.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 17:28:29.645493  118443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 17:28:29.658889  118443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:28:29.663353  118443 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 19 16:35 /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:28:29.663412  118443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 17:28:29.670918  118443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
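	[Editor's note] The `openssl x509 -hash -noout` runs above compute OpenSSL's subject-name hash, which is the lookup key for CA certificates: each trusted cert must be reachable in /etc/ssl/certs as `<hash>.0` (hence symlinks like `b5213941.0` for minikubeCA.pem). A sketch of the same install step, assuming illustrative paths:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA links a PEM under the OpenSSL subject-hash name (<hash>.0)
// in /etc/ssl/certs, which is how the lookups in the log resolve CAs.
// Sketch only; error handling trimmed, and root privileges assumed.
func installCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // mirror `ln -fs`: replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```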
	I0919 17:28:29.684123  118443 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0919 17:28:29.689664  118443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 17:28:29.696189  118443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 17:28:29.703616  118443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 17:28:29.709898  118443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 17:28:29.715995  118443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 17:28:29.722209  118443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
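	[Editor's note] Each `-checkend 86400` call above asks whether the certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. An equivalent check with Go's crypto/x509, as a hedged sketch:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within d, mirroring `openssl x509 -checkend <seconds>`. Sketch only.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```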
	I0919 17:28:29.728062  118443 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-210669 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-210669 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.204 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 17:28:29.728214  118443 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 17:28:29.750751  118443 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 17:28:29.760820  118443 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0919 17:28:29.760839  118443 kubeadm.go:636] restartCluster start
	I0919 17:28:29.760889  118443 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 17:28:29.769922  118443 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:29.770951  118443 kubeconfig.go:135] verify returned: extract IP: "default-k8s-diff-port-210669" does not appear in /home/jenkins/minikube-integration/17240-65689/kubeconfig
	I0919 17:28:29.771789  118443 kubeconfig.go:146] "default-k8s-diff-port-210669" context is missing from /home/jenkins/minikube-integration/17240-65689/kubeconfig - will repair!
	I0919 17:28:29.772886  118443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/kubeconfig: {Name:mkbd16610d1f40f08720849f4f6c1890dee4556c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:28:29.774679  118443 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 17:28:29.783646  118443 api_server.go:166] Checking apiserver status ...
	I0919 17:28:29.783700  118443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:29.794830  118443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:29.794848  118443 api_server.go:166] Checking apiserver status ...
	I0919 17:28:29.794899  118443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:29.805598  118443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:30.306721  118443 api_server.go:166] Checking apiserver status ...
	I0919 17:28:30.306803  118443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:30.319438  118443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:30.805758  118443 api_server.go:166] Checking apiserver status ...
	I0919 17:28:30.805865  118443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:30.821699  118443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:31.306281  118443 api_server.go:166] Checking apiserver status ...
	I0919 17:28:31.306354  118443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:31.318959  118443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:31.806581  118443 api_server.go:166] Checking apiserver status ...
	I0919 17:28:31.806677  118443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:31.823532  118443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:32.305781  118443 api_server.go:166] Checking apiserver status ...
	I0919 17:28:32.305849  118443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:32.320525  118443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:32.805722  118443 api_server.go:166] Checking apiserver status ...
	I0919 17:28:32.805810  118443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:32.818266  118443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
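	[Editor's note] The `pgrep -xnf` probes above retry on a roughly 500ms cadence until a kube-apiserver process appears; each `Process exited with status 1` just means no match yet. A sketch of that poll loop (function name and timeout are illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until a process matching pattern exists
// or the deadline passes. pgrep exits 1 when nothing matches, which is
// the "unable to get apiserver pid" case in the log. Sketch only.
func waitForProcess(pattern string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", pattern).Output()
		if err == nil {
			return string(out), nil // pid found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("no process matched %q within %s", pattern, timeout)
}

func main() {
	pid, err := waitForProcess("kube-apiserver.*minikube.*", time.Minute)
	fmt.Println(pid, err)
}
```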
	I0919 17:28:30.811760  118166 api_server.go:253] Checking apiserver healthz at https://192.168.83.162:8443/healthz ...
	I0919 17:28:30.849952  118166 api_server.go:279] https://192.168.83.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0919 17:28:30.849986  118166 api_server.go:103] status: https://192.168.83.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0919 17:28:31.311538  118166 api_server.go:253] Checking apiserver healthz at https://192.168.83.162:8443/healthz ...
	I0919 17:28:31.319252  118166 api_server.go:279] https://192.168.83.162:8443/healthz returned 200:
	ok
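	[Editor's note] The preceding 500s are expected during startup: every subsystem reports ok except `poststarthook/rbac/bootstrap-roles`, which stays failed until the default RBAC roles are seeded, after which /healthz flips to 200. A sketch of polling the endpoint (the InsecureSkipVerify transport is for illustration only; the real client trusts the cluster CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns
// 200 or the timeout elapses. Sketch only: a production client would
// pin the cluster CA instead of skipping verification.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.83.162:8443/healthz", time.Minute))
}
```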
	I0919 17:28:31.327526  118166 api_server.go:141] control plane version: v1.16.0
	I0919 17:28:31.327552  118166 api_server.go:131] duration metric: took 7.009832667s to wait for apiserver health ...
	I0919 17:28:31.327563  118166 cni.go:84] Creating CNI manager for ""
	I0919 17:28:31.327578  118166 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0919 17:28:31.327587  118166 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 17:28:31.335681  118166 system_pods.go:59] 7 kube-system pods found
	I0919 17:28:31.335707  118166 system_pods.go:61] "coredns-5644d7b6d9-wjqc6" [92117877-e0fe-4d40-9bce-aaadfa89e39b] Running
	I0919 17:28:31.335714  118166 system_pods.go:61] "etcd-old-k8s-version-367105" [d9ece4bf-b2be-4be9-876c-40d3c5cae7e8] Running
	I0919 17:28:31.335720  118166 system_pods.go:61] "kube-apiserver-old-k8s-version-367105" [2ff5f01a-690e-4235-bc8a-bf0b1b8124bc] Running
	I0919 17:28:31.335729  118166 system_pods.go:61] "kube-controller-manager-old-k8s-version-367105" [afa631bc-9808-4846-bb97-09849195a5a2] Pending
	I0919 17:28:31.335734  118166 system_pods.go:61] "kube-proxy-r2vs7" [13a9fcc3-1efb-4196-939b-8e97458c58a2] Running
	I0919 17:28:31.335741  118166 system_pods.go:61] "kube-scheduler-old-k8s-version-367105" [9040c92c-b4b9-42f3-8f0e-adb889ebf770] Running
	I0919 17:28:31.335747  118166 system_pods.go:61] "storage-provisioner" [c4f9dbb5-a4da-498f-9ffd-a9aaf04f5d12] Running
	I0919 17:28:31.335755  118166 system_pods.go:74] duration metric: took 8.160329ms to wait for pod list to return data ...
	I0919 17:28:31.335765  118166 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:28:31.339612  118166 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:28:31.339638  118166 node_conditions.go:123] node cpu capacity is 2
	I0919 17:28:31.339651  118166 node_conditions.go:105] duration metric: took 3.877078ms to run NodePressure ...
	I0919 17:28:31.339673  118166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:28:31.615435  118166 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0919 17:28:31.619142  118166 kubeadm.go:787] kubelet initialised
	I0919 17:28:31.619165  118166 kubeadm.go:788] duration metric: took 3.702382ms waiting for restarted kubelet to initialise ...
	I0919 17:28:31.619176  118166 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:28:31.623015  118166 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-wjqc6" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:31.628427  118166 pod_ready.go:92] pod "coredns-5644d7b6d9-wjqc6" in "kube-system" namespace has status "Ready":"True"
	I0919 17:28:31.628446  118166 pod_ready.go:81] duration metric: took 5.409244ms waiting for pod "coredns-5644d7b6d9-wjqc6" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:31.628454  118166 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-367105" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:31.635310  118166 pod_ready.go:92] pod "etcd-old-k8s-version-367105" in "kube-system" namespace has status "Ready":"True"
	I0919 17:28:31.635329  118166 pod_ready.go:81] duration metric: took 6.869978ms waiting for pod "etcd-old-k8s-version-367105" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:31.635338  118166 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-367105" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:31.639559  118166 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-367105" in "kube-system" namespace has status "Ready":"True"
	I0919 17:28:31.639582  118166 pod_ready.go:81] duration metric: took 4.236475ms waiting for pod "kube-apiserver-old-k8s-version-367105" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:31.639594  118166 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-367105" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:32.639975  118166 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-367105" in "kube-system" namespace has status "Ready":"True"
	I0919 17:28:32.640001  118166 pod_ready.go:81] duration metric: took 1.000396025s waiting for pod "kube-controller-manager-old-k8s-version-367105" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:32.640015  118166 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r2vs7" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:32.931447  118166 pod_ready.go:92] pod "kube-proxy-r2vs7" in "kube-system" namespace has status "Ready":"True"
	I0919 17:28:32.931472  118166 pod_ready.go:81] duration metric: took 291.449117ms waiting for pod "kube-proxy-r2vs7" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:32.931485  118166 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-367105" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:33.331297  118166 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-367105" in "kube-system" namespace has status "Ready":"True"
	I0919 17:28:33.331318  118166 pod_ready.go:81] duration metric: took 399.825618ms waiting for pod "kube-scheduler-old-k8s-version-367105" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:33.331333  118166 pod_ready.go:38] duration metric: took 1.71214178s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
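	[Editor's note] Each pod_ready wait above inspects the pod's Ready condition rather than its phase, since a Running pod can still be unready. A hedged client-go sketch of the same check (kubeconfig path and pod name are taken from the log for illustration):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady mirrors the pod_ready checks above: a pod only counts as
// "Ready" when its PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls until the pod reports Ready or ctx expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %s/%s never became Ready: %w", ns, name, ctx.Err())
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17240-65689/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	fmt.Println(waitPodReady(ctx, cs, "kube-system", "kube-proxy-r2vs7"))
}
```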
	I0919 17:28:33.331352  118166 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 17:28:33.343753  118166 ops.go:34] apiserver oom_adj: -16
	I0919 17:28:33.343778  118166 kubeadm.go:640] restartCluster took 22.631877282s
	I0919 17:28:33.343788  118166 kubeadm.go:406] StartCluster complete in 22.669208794s
	I0919 17:28:33.343809  118166 settings.go:142] acquiring lock: {Name:mk5b0472b3a6dd507de44affe9807f6a73f90c46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:28:33.343894  118166 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17240-65689/kubeconfig
	I0919 17:28:33.346693  118166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/kubeconfig: {Name:mkbd16610d1f40f08720849f4f6c1890dee4556c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:28:33.347405  118166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 17:28:33.347422  118166 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0919 17:28:33.347505  118166 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-367105"
	I0919 17:28:33.347529  118166 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-367105"
	I0919 17:28:33.347534  118166 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-367105"
	W0919 17:28:33.347543  118166 addons.go:240] addon storage-provisioner should already be in state true
	I0919 17:28:33.347546  118166 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-367105"
	I0919 17:28:33.347555  118166 addons.go:69] Setting dashboard=true in profile "old-k8s-version-367105"
	I0919 17:28:33.347584  118166 addons.go:231] Setting addon dashboard=true in "old-k8s-version-367105"
	W0919 17:28:33.347593  118166 addons.go:240] addon dashboard should already be in state true
	I0919 17:28:33.347593  118166 host.go:66] Checking if "old-k8s-version-367105" exists ...
	I0919 17:28:33.347627  118166 config.go:182] Loaded profile config "old-k8s-version-367105": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0919 17:28:33.347636  118166 host.go:66] Checking if "old-k8s-version-367105" exists ...
	I0919 17:28:33.347653  118166 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-367105"
	I0919 17:28:33.347678  118166 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-367105"
	W0919 17:28:33.347686  118166 addons.go:240] addon metrics-server should already be in state true
	I0919 17:28:33.347698  118166 cache.go:107] acquiring lock: {Name:mk39dabf87437641a7731807e46502447a060f17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:28:33.347725  118166 host.go:66] Checking if "old-k8s-version-367105" exists ...
	I0919 17:28:33.347775  118166 cache.go:115] /home/jenkins/minikube-integration/17240-65689/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I0919 17:28:33.347786  118166 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17240-65689/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 94.728µs
	I0919 17:28:33.347797  118166 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17240-65689/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I0919 17:28:33.347810  118166 cache.go:87] Successfully saved all images to host disk.
	I0919 17:28:33.347962  118166 config.go:182] Loaded profile config "old-k8s-version-367105": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0919 17:28:33.347963  118166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:28:33.347963  118166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:28:33.347975  118166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:28:33.347995  118166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:28:33.348010  118166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:28:33.348091  118166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:28:33.348096  118166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:28:33.348124  118166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:28:33.348282  118166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:28:33.348313  118166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:28:33.365175  118166 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-367105" context rescaled to 1 replicas
	I0919 17:28:33.365207  118166 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.83.162 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 17:28:33.367979  118166 out.go:177] * Verifying Kubernetes components...
	I0919 17:28:33.367533  118166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40185
	I0919 17:28:33.369383  118166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:28:33.367593  118166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34933
	I0919 17:28:33.370221  118166 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:28:33.370239  118166 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:28:33.370700  118166 main.go:141] libmachine: Using API Version  1
	I0919 17:28:33.370716  118166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:28:33.370808  118166 main.go:141] libmachine: Using API Version  1
	I0919 17:28:33.370818  118166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:28:33.371217  118166 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:28:33.371397  118166 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:28:33.371852  118166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:28:33.371876  118166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:28:33.372431  118166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:28:33.372466  118166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:28:33.385472  118166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41209
	I0919 17:28:33.386107  118166 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:28:33.386694  118166 main.go:141] libmachine: Using API Version  1
	I0919 17:28:33.386712  118166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:28:33.387311  118166 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:28:33.387627  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetState
	I0919 17:28:33.388611  118166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44651
	I0919 17:28:33.389137  118166 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:28:33.389893  118166 main.go:141] libmachine: Using API Version  1
	I0919 17:28:33.389912  118166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:28:33.389990  118166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44381
	I0919 17:28:33.390453  118166 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:28:33.390516  118166 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:28:33.390766  118166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:28:33.390801  118166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:28:33.390945  118166 main.go:141] libmachine: Using API Version  1
	I0919 17:28:33.390959  118166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:28:33.391110  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetState
	I0919 17:28:33.391561  118166 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:28:33.392168  118166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:28:33.392184  118166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:28:33.398286  118166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39783
	I0919 17:28:33.398633  118166 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:28:33.399491  118166 main.go:141] libmachine: Using API Version  1
	I0919 17:28:33.399510  118166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:28:33.401298  118166 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:28:33.401821  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetState
	I0919 17:28:33.402561  118166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35681
	I0919 17:28:33.403639  118166 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:28:33.403915  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .DriverName
	I0919 17:28:33.406007  118166 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:28:29.676932  117954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:31.676974  117954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:33.404547  118166 main.go:141] libmachine: Using API Version  1
	I0919 17:28:33.406594  118166 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-367105"
	W0919 17:28:33.407641  118166 addons.go:240] addon default-storageclass should already be in state true
	I0919 17:28:33.407681  118166 host.go:66] Checking if "old-k8s-version-367105" exists ...
	I0919 17:28:33.408045  118166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:28:33.408067  118166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:28:33.408405  118166 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 17:28:33.408421  118166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 17:28:33.408439  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHHostname
	I0919 17:28:33.408493  118166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:28:33.409437  118166 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:28:33.409754  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetState
	I0919 17:28:33.412737  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .DriverName
	I0919 17:28:33.414520  118166 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0919 17:28:33.413526  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:28:33.414311  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHPort
	I0919 17:28:33.414601  118166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46365
	I0919 17:28:33.416004  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:9c:55", ip: ""} in network mk-old-k8s-version-367105: {Iface:virbr5 ExpiryTime:2023-09-19 18:27:47 +0000 UTC Type:0 Mac:52:54:00:5d:9c:55 Iaid: IPaddr:192.168.83.162 Prefix:24 Hostname:old-k8s-version-367105 Clientid:01:52:54:00:5d:9c:55}
	I0919 17:28:33.416026  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined IP address 192.168.83.162 and MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:28:33.417489  118166 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0919 17:28:33.418813  118166 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0919 17:28:33.418829  118166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0919 17:28:33.418843  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHHostname
	I0919 17:28:33.416678  118166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41463
	I0919 17:28:33.416702  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHKeyPath
	I0919 17:28:33.416722  118166 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:28:33.419297  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHUsername
	I0919 17:28:33.419307  118166 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:28:33.419535  118166 main.go:141] libmachine: Using API Version  1
	I0919 17:28:33.419555  118166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:28:33.419558  118166 sshutil.go:53] new ssh client: &{IP:192.168.83.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/old-k8s-version-367105/id_rsa Username:docker}
	I0919 17:28:33.419713  118166 main.go:141] libmachine: Using API Version  1
	I0919 17:28:33.419725  118166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:28:33.420051  118166 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:28:33.420241  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .DriverName
	I0919 17:28:33.420462  118166 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 17:28:33.420484  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHHostname
	I0919 17:28:33.420768  118166 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:28:33.421027  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetState
	I0919 17:28:33.423948  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:28:33.424321  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:9c:55", ip: ""} in network mk-old-k8s-version-367105: {Iface:virbr5 ExpiryTime:2023-09-19 18:27:47 +0000 UTC Type:0 Mac:52:54:00:5d:9c:55 Iaid: IPaddr:192.168.83.162 Prefix:24 Hostname:old-k8s-version-367105 Clientid:01:52:54:00:5d:9c:55}
	I0919 17:28:33.424358  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined IP address 192.168.83.162 and MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:28:33.424640  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHPort
	I0919 17:28:33.424692  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:28:33.424910  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHKeyPath
	I0919 17:28:33.424956  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .DriverName
	I0919 17:28:33.425055  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHUsername
	I0919 17:28:33.425149  118166 sshutil.go:53] new ssh client: &{IP:192.168.83.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/old-k8s-version-367105/id_rsa Username:docker}
	I0919 17:28:33.426997  118166 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0919 17:28:33.425688  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHPort
	I0919 17:28:33.425729  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:9c:55", ip: ""} in network mk-old-k8s-version-367105: {Iface:virbr5 ExpiryTime:2023-09-19 18:27:47 +0000 UTC Type:0 Mac:52:54:00:5d:9c:55 Iaid: IPaddr:192.168.83.162 Prefix:24 Hostname:old-k8s-version-367105 Clientid:01:52:54:00:5d:9c:55}
	I0919 17:28:33.428239  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined IP address 192.168.83.162 and MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:28:33.428285  118166 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 17:28:33.428301  118166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 17:28:33.428322  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHHostname
	I0919 17:28:33.428499  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHKeyPath
	I0919 17:28:33.428759  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHUsername
	I0919 17:28:33.428903  118166 sshutil.go:53] new ssh client: &{IP:192.168.83.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/old-k8s-version-367105/id_rsa Username:docker}
	I0919 17:28:33.431761  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:28:33.432185  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:9c:55", ip: ""} in network mk-old-k8s-version-367105: {Iface:virbr5 ExpiryTime:2023-09-19 18:27:47 +0000 UTC Type:0 Mac:52:54:00:5d:9c:55 Iaid: IPaddr:192.168.83.162 Prefix:24 Hostname:old-k8s-version-367105 Clientid:01:52:54:00:5d:9c:55}
	I0919 17:28:33.432213  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined IP address 192.168.83.162 and MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:28:33.432371  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHPort
	I0919 17:28:33.432574  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHKeyPath
	I0919 17:28:33.432719  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHUsername
	I0919 17:28:33.432867  118166 sshutil.go:53] new ssh client: &{IP:192.168.83.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/old-k8s-version-367105/id_rsa Username:docker}
	I0919 17:28:33.434560  118166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45369
	I0919 17:28:33.434923  118166 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:28:33.435427  118166 main.go:141] libmachine: Using API Version  1
	I0919 17:28:33.435445  118166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:28:33.435861  118166 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:28:33.436467  118166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:28:33.436489  118166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:28:33.457070  118166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33373
	I0919 17:28:33.457556  118166 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:28:33.458112  118166 main.go:141] libmachine: Using API Version  1
	I0919 17:28:33.458142  118166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:28:33.458544  118166 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:28:33.458734  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetState
	I0919 17:28:33.460326  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .DriverName
	I0919 17:28:33.460560  118166 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 17:28:33.460578  118166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 17:28:33.460596  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHHostname
	I0919 17:28:33.465355  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHPort
	I0919 17:28:33.465422  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:28:33.465454  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:9c:55", ip: ""} in network mk-old-k8s-version-367105: {Iface:virbr5 ExpiryTime:2023-09-19 18:27:47 +0000 UTC Type:0 Mac:52:54:00:5d:9c:55 Iaid: IPaddr:192.168.83.162 Prefix:24 Hostname:old-k8s-version-367105 Clientid:01:52:54:00:5d:9c:55}
	I0919 17:28:33.465480  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | domain old-k8s-version-367105 has defined IP address 192.168.83.162 and MAC address 52:54:00:5d:9c:55 in network mk-old-k8s-version-367105
	I0919 17:28:33.465505  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHKeyPath
	I0919 17:28:33.465873  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .GetSSHUsername
	I0919 17:28:33.466027  118166 sshutil.go:53] new ssh client: &{IP:192.168.83.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/old-k8s-version-367105/id_rsa Username:docker}
	I0919 17:28:33.569150  118166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 17:28:33.591813  118166 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0919 17:28:33.591844  118166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0919 17:28:33.596727  118166 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 17:28:33.596752  118166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0919 17:28:33.659962  118166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 17:28:33.665748  118166 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0919 17:28:33.665770  118166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0919 17:28:33.669114  118166 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 17:28:33.669135  118166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 17:28:33.753121  118166 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0919 17:28:33.753142  118166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0919 17:28:33.753229  118166 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 17:28:33.753250  118166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 17:28:33.810731  118166 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-367105" to be "Ready" ...
	I0919 17:28:33.810993  118166 docker.go:636] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	k8s.gcr.io/pause:3.1
	registry.k8s.io/pause:3.1
	
	-- /stdout --
	I0919 17:28:33.811018  118166 cache_images.go:84] Images are preloaded, skipping loading
	I0919 17:28:33.811027  118166 cache_images.go:262] succeeded pushing to: old-k8s-version-367105
	I0919 17:28:33.811039  118166 cache_images.go:263] failed pushing to: 
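	[Editor's note] The preload check above lists images with `docker images --format {{.Repository}}:{{.Tag}}` and skips loading when everything required is already present. A sketch of that probe (the comparison image is illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listImages runs the same probe as the log (docker images with a
// Repository:Tag format) and returns the tags as a set, which can then
// decide whether image loading may be skipped. Sketch only.
func listImages() (map[string]bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return nil, err
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			have[line] = true
		}
	}
	return have, nil
}

func main() {
	have, err := listImages()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("preloaded:", have["k8s.gcr.io/kube-apiserver:v1.16.0"])
}
```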
	I0919 17:28:33.811062  118166 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:33.811075  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .Close
	I0919 17:28:33.811234  118166 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0919 17:28:33.811472  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | Closing plugin on server side
	I0919 17:28:33.811507  118166 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:33.811534  118166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:33.811544  118166 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:33.811554  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .Close
	I0919 17:28:33.813316  118166 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:33.813336  118166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:33.817669  118166 node_ready.go:49] node "old-k8s-version-367105" has status "Ready":"True"
	I0919 17:28:33.817692  118166 node_ready.go:38] duration metric: took 6.929079ms waiting for node "old-k8s-version-367105" to be "Ready" ...
	I0919 17:28:33.817705  118166 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:28:33.822812  118166 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-wjqc6" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:33.851407  118166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 17:28:33.913770  118166 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0919 17:28:33.913797  118166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0919 17:28:33.950927  118166 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0919 17:28:33.950958  118166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0919 17:28:34.137466  118166 pod_ready.go:92] pod "coredns-5644d7b6d9-wjqc6" in "kube-system" namespace has status "Ready":"True"
	I0919 17:28:34.137487  118166 pod_ready.go:81] duration metric: took 314.649939ms waiting for pod "coredns-5644d7b6d9-wjqc6" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:34.137498  118166 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-367105" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:34.140838  118166 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0919 17:28:34.140858  118166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0919 17:28:34.225095  118166 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0919 17:28:34.225118  118166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0919 17:28:34.279730  118166 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0919 17:28:34.279761  118166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0919 17:28:34.313885  118166 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0919 17:28:34.313909  118166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0919 17:28:34.342811  118166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
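	[Editor's note] All dashboard manifests go through a single kubectl invocation with repeated `-f` flags, pinned to the in-VM binary and kubeconfig rather than any host kubectl. A sketch of composing that command line (helper name is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// applyCmd builds the single kubectl invocation seen in the log: one
// process, many -f flags, using the in-VM binary and kubeconfig.
// Illustrative sketch, not minikube's actual helper.
func applyCmd(version string, files []string) string {
	args := []string{
		"sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		fmt.Sprintf("/var/lib/minikube/binaries/%s/kubectl", version),
		"apply",
	}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	return strings.Join(args, " ")
}

func main() {
	fmt.Println(applyCmd("v1.16.0", []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
	}))
}
```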
	I0919 17:28:34.367766  118166 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:34.367792  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .Close
	I0919 17:28:34.367861  118166 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:34.367897  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .Close
	I0919 17:28:34.368233  118166 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:34.368264  118166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:34.368276  118166 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:34.368290  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .Close
	I0919 17:28:34.368302  118166 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:34.368315  118166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:34.368329  118166 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:34.368338  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .Close
	I0919 17:28:34.368533  118166 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:34.368551  118166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:34.368564  118166 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:34.368574  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .Close
	I0919 17:28:34.368579  118166 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:34.368595  118166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:34.368880  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | Closing plugin on server side
	I0919 17:28:34.368889  118166 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:34.368921  118166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:34.534433  118166 pod_ready.go:92] pod "etcd-old-k8s-version-367105" in "kube-system" namespace has status "Ready":"True"
	I0919 17:28:34.534462  118166 pod_ready.go:81] duration metric: took 396.956083ms waiting for pod "etcd-old-k8s-version-367105" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:34.534475  118166 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-367105" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:34.621799  118166 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:34.621835  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .Close
	I0919 17:28:34.622190  118166 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:34.622209  118166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:34.622221  118166 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:34.622232  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .Close
	I0919 17:28:34.622506  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | Closing plugin on server side
	I0919 17:28:34.622554  118166 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:34.622568  118166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:34.622581  118166 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-367105"
	I0919 17:28:34.839414  118166 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:34.839436  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .Close
	I0919 17:28:34.839794  118166 main.go:141] libmachine: (old-k8s-version-367105) DBG | Closing plugin on server side
	I0919 17:28:34.839844  118166 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:34.839866  118166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:34.839884  118166 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:34.839897  118166 main.go:141] libmachine: (old-k8s-version-367105) Calling .Close
	I0919 17:28:34.840130  118166 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:34.840147  118166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:34.842086  118166 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
	minikube -p old-k8s-version-367105 addons enable metrics-server
	
	
	I0919 17:28:34.843765  118166 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0919 17:28:34.845190  118166 addons.go:502] enable addons completed in 1.497764684s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0919 17:28:34.933551  118166 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-367105" in "kube-system" namespace has status "Ready":"True"
	I0919 17:28:34.933578  118166 pod_ready.go:81] duration metric: took 399.093964ms waiting for pod "kube-apiserver-old-k8s-version-367105" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:34.933592  118166 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-367105" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:35.334181  118166 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-367105" in "kube-system" namespace has status "Ready":"True"
	I0919 17:28:35.334200  118166 pod_ready.go:81] duration metric: took 400.600195ms waiting for pod "kube-controller-manager-old-k8s-version-367105" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:35.334210  118166 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-r2vs7" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:33.083270  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:35.577770  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:33.305773  118443 api_server.go:166] Checking apiserver status ...
	I0919 17:28:33.305854  118443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:33.317547  118443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:33.806311  118443 api_server.go:166] Checking apiserver status ...
	I0919 17:28:33.806406  118443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:33.822314  118443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:34.305870  118443 api_server.go:166] Checking apiserver status ...
	I0919 17:28:34.305941  118443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:34.322046  118443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:34.806650  118443 api_server.go:166] Checking apiserver status ...
	I0919 17:28:34.806749  118443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:34.822788  118443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:35.305916  118443 api_server.go:166] Checking apiserver status ...
	I0919 17:28:35.306001  118443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:35.318420  118443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:35.805695  118443 api_server.go:166] Checking apiserver status ...
	I0919 17:28:35.805804  118443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:35.817780  118443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:36.306274  118443 api_server.go:166] Checking apiserver status ...
	I0919 17:28:36.306361  118443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:36.318073  118443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:36.806710  118443 api_server.go:166] Checking apiserver status ...
	I0919 17:28:36.806790  118443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:36.818294  118443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:37.305680  118443 api_server.go:166] Checking apiserver status ...
	I0919 17:28:37.305768  118443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:37.318510  118443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:37.805710  118443 api_server.go:166] Checking apiserver status ...
	I0919 17:28:37.805793  118443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:37.817870  118443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
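	[editor's note] The repeated failed pgrep probes above are minikube (process 118443) polling for the kube-apiserver PID at roughly 500ms intervals while the control plane is still down during the restart. A minimal sketch of that poll as a standalone Go program, not minikube's actual code; the two-minute deadline is an assumption:
	
		// pollapiserver.go - hedged sketch of the PID poll recorded above.
		package main
	
		import (
			"fmt"
			"os/exec"
			"time"
		)
	
		func main() {
			deadline := time.Now().Add(2 * time.Minute) // assumed timeout
			for time.Now().Before(deadline) {
				// -x exact match, -n newest process, -f match full command line,
				// exactly as the ssh_runner invocations in the log.
				out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
				if err == nil {
					fmt.Printf("apiserver pid: %s", out)
					return
				}
				// pgrep exits 1 when nothing matches, as in the log's warnings.
				time.Sleep(500 * time.Millisecond)
			}
			fmt.Println("timed out waiting for kube-apiserver process")
		}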
	I0919 17:28:34.177415  117954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:36.676733  117954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:37.637091  118166 pod_ready.go:102] pod "kube-proxy-r2vs7" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:38.137216  118166 pod_ready.go:92] pod "kube-proxy-r2vs7" in "kube-system" namespace has status "Ready":"True"
	I0919 17:28:38.137238  118166 pod_ready.go:81] duration metric: took 2.803022384s waiting for pod "kube-proxy-r2vs7" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:38.137246  118166 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-367105" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:38.142245  118166 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-367105" in "kube-system" namespace has status "Ready":"True"
	I0919 17:28:38.142265  118166 pod_ready.go:81] duration metric: took 5.011936ms waiting for pod "kube-scheduler-old-k8s-version-367105" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:38.142274  118166 pod_ready.go:38] duration metric: took 4.324557379s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
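	[editor's note] Each pod_ready.go wait above amounts to polling the pod's Ready condition through the API server. A hedged sketch of one such wait using client-go, not minikube's actual helper; the kubeconfig path, pod name, and poll interval are assumptions taken from the log:
	
		// podready.go - sketch of a "wait for pod Ready" loop, under the
		// assumptions noted above.
		package main
	
		import (
			"context"
			"fmt"
			"time"
	
			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)
	
		func main() {
			cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
			if err != nil {
				panic(err)
			}
			cs, err := kubernetes.NewForConfig(cfg)
			if err != nil {
				panic(err)
			}
			ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // matches "waiting up to 6m0s"
			defer cancel()
			for {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-proxy-r2vs7", metav1.GetOptions{})
				if err == nil {
					for _, c := range pod.Status.Conditions {
						if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
							fmt.Println("pod is Ready") // the `pod_ready.go:92` case above
							return
						}
					}
				}
				select {
				case <-ctx.Done():
					fmt.Println("timed out waiting for pod")
					return
				case <-time.After(500 * time.Millisecond):
				}
			}
		}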
	I0919 17:28:38.142293  118166 api_server.go:52] waiting for apiserver process to appear ...
	I0919 17:28:38.142336  118166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:28:38.155703  118166 api_server.go:72] duration metric: took 4.79046606s to wait for apiserver process to appear ...
	I0919 17:28:38.155723  118166 api_server.go:88] waiting for apiserver healthz status ...
	I0919 17:28:38.155754  118166 api_server.go:253] Checking apiserver healthz at https://192.168.83.162:8443/healthz ...
	I0919 17:28:38.162113  118166 api_server.go:279] https://192.168.83.162:8443/healthz returned 200:
	ok
	I0919 17:28:38.163321  118166 api_server.go:141] control plane version: v1.16.0
	I0919 17:28:38.163346  118166 api_server.go:131] duration metric: took 7.614903ms to wait for apiserver health ...
	I0919 17:28:38.163356  118166 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 17:28:38.334602  118166 system_pods.go:59] 7 kube-system pods found
	I0919 17:28:38.334627  118166 system_pods.go:61] "coredns-5644d7b6d9-wjqc6" [92117877-e0fe-4d40-9bce-aaadfa89e39b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:28:38.334634  118166 system_pods.go:61] "etcd-old-k8s-version-367105" [d9ece4bf-b2be-4be9-876c-40d3c5cae7e8] Running
	I0919 17:28:38.334639  118166 system_pods.go:61] "kube-apiserver-old-k8s-version-367105" [2ff5f01a-690e-4235-bc8a-bf0b1b8124bc] Running
	I0919 17:28:38.334645  118166 system_pods.go:61] "kube-controller-manager-old-k8s-version-367105" [afa631bc-9808-4846-bb97-09849195a5a2] Running
	I0919 17:28:38.334649  118166 system_pods.go:61] "kube-proxy-r2vs7" [13a9fcc3-1efb-4196-939b-8e97458c58a2] Running
	I0919 17:28:38.334653  118166 system_pods.go:61] "kube-scheduler-old-k8s-version-367105" [9040c92c-b4b9-42f3-8f0e-adb889ebf770] Running
	I0919 17:28:38.334660  118166 system_pods.go:61] "storage-provisioner" [c4f9dbb5-a4da-498f-9ffd-a9aaf04f5d12] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 17:28:38.334668  118166 system_pods.go:74] duration metric: took 171.304986ms to wait for pod list to return data ...
	I0919 17:28:38.334677  118166 default_sa.go:34] waiting for default service account to be created ...
	I0919 17:28:38.531449  118166 default_sa.go:45] found service account: "default"
	I0919 17:28:38.531478  118166 default_sa.go:55] duration metric: took 196.792113ms for default service account to be created ...
	I0919 17:28:38.531486  118166 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 17:28:38.733778  118166 system_pods.go:86] 7 kube-system pods found
	I0919 17:28:38.733806  118166 system_pods.go:89] "coredns-5644d7b6d9-wjqc6" [92117877-e0fe-4d40-9bce-aaadfa89e39b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:28:38.733815  118166 system_pods.go:89] "etcd-old-k8s-version-367105" [d9ece4bf-b2be-4be9-876c-40d3c5cae7e8] Running
	I0919 17:28:38.733820  118166 system_pods.go:89] "kube-apiserver-old-k8s-version-367105" [2ff5f01a-690e-4235-bc8a-bf0b1b8124bc] Running
	I0919 17:28:38.733825  118166 system_pods.go:89] "kube-controller-manager-old-k8s-version-367105" [afa631bc-9808-4846-bb97-09849195a5a2] Running
	I0919 17:28:38.733829  118166 system_pods.go:89] "kube-proxy-r2vs7" [13a9fcc3-1efb-4196-939b-8e97458c58a2] Running
	I0919 17:28:38.733833  118166 system_pods.go:89] "kube-scheduler-old-k8s-version-367105" [9040c92c-b4b9-42f3-8f0e-adb889ebf770] Running
	I0919 17:28:38.733836  118166 system_pods.go:89] "storage-provisioner" [c4f9dbb5-a4da-498f-9ffd-a9aaf04f5d12] Running
	I0919 17:28:38.733844  118166 system_pods.go:126] duration metric: took 202.351043ms to wait for k8s-apps to be running ...
	I0919 17:28:38.733850  118166 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 17:28:38.733898  118166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:28:38.748191  118166 system_svc.go:56] duration metric: took 14.320698ms WaitForService to wait for kubelet.
	I0919 17:28:38.748217  118166 kubeadm.go:581] duration metric: took 5.382985556s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0919 17:28:38.748248  118166 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:28:38.933107  118166 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:28:38.933141  118166 node_conditions.go:123] node cpu capacity is 2
	I0919 17:28:38.933153  118166 node_conditions.go:105] duration metric: took 184.899881ms to run NodePressure ...
	I0919 17:28:38.933166  118166 start.go:228] waiting for startup goroutines ...
	I0919 17:28:38.933175  118166 start.go:233] waiting for cluster config update ...
	I0919 17:28:38.933189  118166 start.go:242] writing updated cluster config ...
	I0919 17:28:38.933508  118166 ssh_runner.go:195] Run: rm -f paused
	I0919 17:28:38.985347  118166 start.go:600] kubectl: 1.28.2, cluster: 1.16.0 (minor skew: 12)
	I0919 17:28:38.987338  118166 out.go:177] 
	W0919 17:28:38.988754  118166 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.16.0.
	I0919 17:28:38.990135  118166 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0919 17:28:38.991507  118166 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-367105" cluster and "default" namespace by default
	I0919 17:28:37.579008  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:39.579262  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:41.580118  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:38.306574  118443 api_server.go:166] Checking apiserver status ...
	I0919 17:28:38.306666  118443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:38.318077  118443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:38.806267  118443 api_server.go:166] Checking apiserver status ...
	I0919 17:28:38.806346  118443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:38.817962  118443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:39.306607  118443 api_server.go:166] Checking apiserver status ...
	I0919 17:28:39.306698  118443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0919 17:28:39.319967  118443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0919 17:28:39.784752  118443 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0919 17:28:39.784787  118443 kubeadm.go:1128] stopping kube-system containers ...
	I0919 17:28:39.784872  118443 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0919 17:28:39.806834  118443 docker.go:462] Stopping containers: [6888a6629685 becce2d47f20 540111517b00 96891569eed9 b10bc818a093 8106dfc175e4 0907f07f2b53 2f027f297302 dffb459973d5 656fbec43c6e 9d9847c2f140 2a9bf3e4e96a 241a7d4551a3 ca227e9b51e8 71d88a718599 6c075c67b020]
	I0919 17:28:39.806951  118443 ssh_runner.go:195] Run: docker stop 6888a6629685 becce2d47f20 540111517b00 96891569eed9 b10bc818a093 8106dfc175e4 0907f07f2b53 2f027f297302 dffb459973d5 656fbec43c6e 9d9847c2f140 2a9bf3e4e96a 241a7d4551a3 ca227e9b51e8 71d88a718599 6c075c67b020
	I0919 17:28:39.829320  118443 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0919 17:28:39.845396  118443 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 17:28:39.854888  118443 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 17:28:39.854949  118443 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 17:28:39.864271  118443 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0919 17:28:39.864299  118443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:28:39.997488  118443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:28:41.071017  118443 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.073480404s)
	I0919 17:28:41.071061  118443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:28:41.268207  118443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:28:41.369105  118443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:28:41.462832  118443 api_server.go:52] waiting for apiserver process to appear ...
	I0919 17:28:41.462917  118443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:28:41.481147  118443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:28:42.001995  118443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:28:42.502329  118443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:28:43.001754  118443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:28:39.176464  117954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:41.176698  117954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:43.178165  117954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:44.083983  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:46.579741  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:43.502190  118443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 17:28:43.537640  118443 api_server.go:72] duration metric: took 2.074794261s to wait for apiserver process to appear ...
	I0919 17:28:43.537673  118443 api_server.go:88] waiting for apiserver healthz status ...
	I0919 17:28:43.537694  118443 api_server.go:253] Checking apiserver healthz at https://192.168.61.204:8444/healthz ...
	I0919 17:28:43.538701  118443 api_server.go:269] stopped: https://192.168.61.204:8444/healthz: Get "https://192.168.61.204:8444/healthz": dial tcp 192.168.61.204:8444: connect: connection refused
	I0919 17:28:43.538735  118443 api_server.go:253] Checking apiserver healthz at https://192.168.61.204:8444/healthz ...
	I0919 17:28:43.539450  118443 api_server.go:269] stopped: https://192.168.61.204:8444/healthz: Get "https://192.168.61.204:8444/healthz": dial tcp 192.168.61.204:8444: connect: connection refused
	I0919 17:28:44.040176  118443 api_server.go:253] Checking apiserver healthz at https://192.168.61.204:8444/healthz ...
	I0919 17:28:47.633318  118443 api_server.go:279] https://192.168.61.204:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0919 17:28:47.633365  118443 api_server.go:103] status: https://192.168.61.204:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0919 17:28:47.633396  118443 api_server.go:253] Checking apiserver healthz at https://192.168.61.204:8444/healthz ...
	I0919 17:28:47.762422  118443 api_server.go:279] https://192.168.61.204:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0919 17:28:47.762460  118443 api_server.go:103] status: https://192.168.61.204:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0919 17:28:48.039635  118443 api_server.go:253] Checking apiserver healthz at https://192.168.61.204:8444/healthz ...
	I0919 17:28:48.046510  118443 api_server.go:279] https://192.168.61.204:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0919 17:28:48.046546  118443 api_server.go:103] status: https://192.168.61.204:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0919 17:28:45.675935  117954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:47.678290  117954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:48.540540  118443 api_server.go:253] Checking apiserver healthz at https://192.168.61.204:8444/healthz ...
	I0919 17:28:48.546097  118443 api_server.go:279] https://192.168.61.204:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0919 17:28:48.546126  118443 api_server.go:103] status: https://192.168.61.204:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0919 17:28:49.039636  118443 api_server.go:253] Checking apiserver healthz at https://192.168.61.204:8444/healthz ...
	I0919 17:28:49.046285  118443 api_server.go:279] https://192.168.61.204:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0919 17:28:49.046318  118443 api_server.go:103] status: https://192.168.61.204:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0919 17:28:49.539797  118443 api_server.go:253] Checking apiserver healthz at https://192.168.61.204:8444/healthz ...
	I0919 17:28:49.545316  118443 api_server.go:279] https://192.168.61.204:8444/healthz returned 200:
	ok
	I0919 17:28:49.553786  118443 api_server.go:141] control plane version: v1.28.2
	I0919 17:28:49.553817  118443 api_server.go:131] duration metric: took 6.016135439s to wait for apiserver health ...
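	[editor's note] The healthz progression above is the normal startup order for a restarted apiserver: connection refused while the process binds, 403 for the anonymous probe before RBAC bootstrap, 500 while poststarthooks such as rbac/bootstrap-roles are still failing, then 200 "ok". A minimal sketch of such a poll, assuming the endpoint shown in the log; a real client would trust the cluster CA rather than skip verification:
	
		// healthzpoll.go - hedged sketch of the /healthz poll recorded above.
		package main
	
		import (
			"crypto/tls"
			"fmt"
			"io"
			"net/http"
			"time"
		)
	
		func main() {
			client := &http.Client{
				// The apiserver serves a cluster-internal cert; skipping
				// verification keeps this sketch self-contained.
				Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
				Timeout:   5 * time.Second,
			}
			url := "https://192.168.61.204:8444/healthz" // address from the log above
			for {
				resp, err := client.Get(url)
				if err != nil {
					// "connect: connection refused" phase: retry.
					time.Sleep(500 * time.Millisecond)
					continue
				}
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %s\n", body) // "ok"
					return
				}
				// 403 (anonymous user) or 500 (poststarthooks failing): retry.
				time.Sleep(500 * time.Millisecond)
			}
		}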
	I0919 17:28:49.553830  118443 cni.go:84] Creating CNI manager for ""
	I0919 17:28:49.553846  118443 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0919 17:28:49.555855  118443 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 17:28:49.557395  118443 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 17:28:49.571005  118443 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
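	[editor's note] The two lines above create /etc/cni/net.d and copy in a 457-byte 1-k8s.conflist for the bridge CNI. The file's exact contents are not shown in the log, so the conflist below is an assumption modeled on a typical bridge-plus-portmap chain, written out by a small Go sketch:
	
		// writecni.go - hedged sketch of the bridge CNI config step above;
		// the conflist body and subnet are assumptions, not minikube's file.
		package main
	
		import (
			"fmt"
			"os"
		)
	
		const conflist = `{
		  "cniVersion": "0.3.1",
		  "name": "bridge",
		  "plugins": [
		    {
		      "type": "bridge",
		      "bridge": "bridge",
		      "isDefaultGateway": true,
		      "ipMasq": true,
		      "hairpinMode": true,
		      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
		    },
		    {"type": "portmap", "capabilities": {"portMappings": true}}
		  ]
		}`
	
		func main() {
			if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil { // mirrors `sudo mkdir -p` above
				panic(err)
			}
			if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
				panic(err)
			}
			fmt.Println("wrote /etc/cni/net.d/1-k8s.conflist")
		}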
	I0919 17:28:49.605653  118443 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 17:28:49.621289  118443 system_pods.go:59] 8 kube-system pods found
	I0919 17:28:49.621333  118443 system_pods.go:61] "coredns-5dd5756b68-hm48n" [f9abbb3d-a798-459f-b3f4-1b6bf1c82a82] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 17:28:49.621351  118443 system_pods.go:61] "etcd-default-k8s-diff-port-210669" [718eb099-d29f-46e5-962b-1d4e3939a1e6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 17:28:49.621363  118443 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-210669" [25c18896-afb9-4b74-93d4-c02aedec55f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 17:28:49.621376  118443 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-210669" [e4bf3ccc-e1a1-491f-9b99-1b9aaa0b4912] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 17:28:49.621387  118443 system_pods.go:61] "kube-proxy-bn9gt" [c1a9bc44-d380-4835-bc0b-fb37991e7cb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 17:28:49.621405  118443 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-210669" [426de46c-32b9-494e-ab20-267385fe6936] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 17:28:49.621453  118443 system_pods.go:61] "metrics-server-57f55c9bc5-jr5n2" [0abd98c1-a3cf-421e-9766-5c620c1960bb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 17:28:49.621475  118443 system_pods.go:61] "storage-provisioner" [6b9f9312-7218-4a11-be3a-c19cb857adc0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 17:28:49.621505  118443 system_pods.go:74] duration metric: took 15.822719ms to wait for pod list to return data ...
	I0919 17:28:49.621523  118443 node_conditions.go:102] verifying NodePressure condition ...
	I0919 17:28:49.627712  118443 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0919 17:28:49.627789  118443 node_conditions.go:123] node cpu capacity is 2
	I0919 17:28:49.627829  118443 node_conditions.go:105] duration metric: took 6.300285ms to run NodePressure ...
	I0919 17:28:49.627853  118443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 17:28:50.152305  118443 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0919 17:28:50.168172  118443 kubeadm.go:787] kubelet initialised
	I0919 17:28:50.168274  118443 kubeadm.go:788] duration metric: took 15.936483ms waiting for restarted kubelet to initialise ...
	I0919 17:28:50.168302  118443 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:28:50.175859  118443 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hm48n" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:50.183354  118443 pod_ready.go:97] node "default-k8s-diff-port-210669" hosting pod "coredns-5dd5756b68-hm48n" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-210669" has status "Ready":"False"
	I0919 17:28:50.183388  118443 pod_ready.go:81] duration metric: took 7.495523ms waiting for pod "coredns-5dd5756b68-hm48n" in "kube-system" namespace to be "Ready" ...
	E0919 17:28:50.183402  118443 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-210669" hosting pod "coredns-5dd5756b68-hm48n" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-210669" has status "Ready":"False"
	I0919 17:28:50.183412  118443 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-210669" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:50.197217  118443 pod_ready.go:97] node "default-k8s-diff-port-210669" hosting pod "etcd-default-k8s-diff-port-210669" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-210669" has status "Ready":"False"
	I0919 17:28:50.197255  118443 pod_ready.go:81] duration metric: took 13.831435ms waiting for pod "etcd-default-k8s-diff-port-210669" in "kube-system" namespace to be "Ready" ...
	E0919 17:28:50.197269  118443 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-210669" hosting pod "etcd-default-k8s-diff-port-210669" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-210669" has status "Ready":"False"
	I0919 17:28:50.197279  118443 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-210669" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:50.203882  118443 pod_ready.go:97] node "default-k8s-diff-port-210669" hosting pod "kube-apiserver-default-k8s-diff-port-210669" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-210669" has status "Ready":"False"
	I0919 17:28:50.203907  118443 pod_ready.go:81] duration metric: took 6.615526ms waiting for pod "kube-apiserver-default-k8s-diff-port-210669" in "kube-system" namespace to be "Ready" ...
	E0919 17:28:50.203918  118443 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-210669" hosting pod "kube-apiserver-default-k8s-diff-port-210669" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-210669" has status "Ready":"False"
	I0919 17:28:50.203924  118443 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-210669" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:50.209516  118443 pod_ready.go:97] node "default-k8s-diff-port-210669" hosting pod "kube-controller-manager-default-k8s-diff-port-210669" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-210669" has status "Ready":"False"
	I0919 17:28:50.209537  118443 pod_ready.go:81] duration metric: took 5.605514ms waiting for pod "kube-controller-manager-default-k8s-diff-port-210669" in "kube-system" namespace to be "Ready" ...
	E0919 17:28:50.209546  118443 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-210669" hosting pod "kube-controller-manager-default-k8s-diff-port-210669" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-210669" has status "Ready":"False"
	I0919 17:28:50.209554  118443 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bn9gt" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:50.556500  118443 pod_ready.go:97] node "default-k8s-diff-port-210669" hosting pod "kube-proxy-bn9gt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-210669" has status "Ready":"False"
	I0919 17:28:50.556532  118443 pod_ready.go:81] duration metric: took 346.969081ms waiting for pod "kube-proxy-bn9gt" in "kube-system" namespace to be "Ready" ...
	E0919 17:28:50.556546  118443 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-210669" hosting pod "kube-proxy-bn9gt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-210669" has status "Ready":"False"
	I0919 17:28:50.556557  118443 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-210669" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:50.956924  118443 pod_ready.go:97] node "default-k8s-diff-port-210669" hosting pod "kube-scheduler-default-k8s-diff-port-210669" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-210669" has status "Ready":"False"
	I0919 17:28:50.956955  118443 pod_ready.go:81] duration metric: took 400.387137ms waiting for pod "kube-scheduler-default-k8s-diff-port-210669" in "kube-system" namespace to be "Ready" ...
	E0919 17:28:50.956969  118443 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-210669" hosting pod "kube-scheduler-default-k8s-diff-port-210669" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-210669" has status "Ready":"False"
	I0919 17:28:50.956978  118443 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-jr5n2" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:51.356618  118443 pod_ready.go:97] node "default-k8s-diff-port-210669" hosting pod "metrics-server-57f55c9bc5-jr5n2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-210669" has status "Ready":"False"
	I0919 17:28:51.356659  118443 pod_ready.go:81] duration metric: took 399.669712ms waiting for pod "metrics-server-57f55c9bc5-jr5n2" in "kube-system" namespace to be "Ready" ...
	E0919 17:28:51.356675  118443 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-210669" hosting pod "metrics-server-57f55c9bc5-jr5n2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-210669" has status "Ready":"False"
	I0919 17:28:51.356685  118443 pod_ready.go:38] duration metric: took 1.188359872s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:28:51.356709  118443 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 17:28:51.370438  118443 ops.go:34] apiserver oom_adj: -16
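	[editor's note] The "apiserver oom_adj: -16" reading above comes straight from procfs; a negative score tells the kernel's OOM killer to spare the apiserver under memory pressure. A small sketch of the same check, assuming a single matching process and root access:
	
		// oomadj.go - hedged sketch of the oom_adj check recorded above.
		package main
	
		import (
			"fmt"
			"os"
			"os/exec"
			"strings"
		)
	
		func main() {
			// Find the newest kube-apiserver PID, as the log's pgrep call does.
			out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err != nil {
				panic(err)
			}
			pid := strings.TrimSpace(string(out))
			adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
			if err != nil {
				panic(err)
			}
			fmt.Printf("apiserver oom_adj: %s", adj) // -16 in the run above
		}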
	I0919 17:28:51.370459  118443 kubeadm.go:640] restartCluster took 21.609612582s
	I0919 17:28:51.370469  118443 kubeadm.go:406] StartCluster complete in 21.642417315s
	I0919 17:28:51.370490  118443 settings.go:142] acquiring lock: {Name:mk5b0472b3a6dd507de44affe9807f6a73f90c46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:28:51.370583  118443 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17240-65689/kubeconfig
	I0919 17:28:51.372402  118443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/kubeconfig: {Name:mkbd16610d1f40f08720849f4f6c1890dee4556c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 17:28:51.372696  118443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 17:28:51.372735  118443 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0919 17:28:51.372842  118443 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-210669"
	I0919 17:28:51.372864  118443 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-210669"
	W0919 17:28:51.372875  118443 addons.go:240] addon storage-provisioner should already be in state true
	I0919 17:28:51.372940  118443 host.go:66] Checking if "default-k8s-diff-port-210669" exists ...
	I0919 17:28:51.372971  118443 config.go:182] Loaded profile config "default-k8s-diff-port-210669": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 17:28:51.373041  118443 cache.go:107] acquiring lock: {Name:mk39dabf87437641a7731807e46502447a060f17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 17:28:51.373128  118443 cache.go:115] /home/jenkins/minikube-integration/17240-65689/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I0919 17:28:51.373141  118443 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17240-65689/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 106.171µs
	I0919 17:28:51.373156  118443 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17240-65689/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I0919 17:28:51.373165  118443 cache.go:87] Successfully saved all images to host disk.
	I0919 17:28:51.373352  118443 config.go:182] Loaded profile config "default-k8s-diff-port-210669": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 17:28:51.373397  118443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:28:51.373445  118443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:28:51.373456  118443 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-210669"
	I0919 17:28:51.373475  118443 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-210669"
	I0919 17:28:51.373667  118443 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-210669"
	I0919 17:28:51.373691  118443 addons.go:231] Setting addon dashboard=true in "default-k8s-diff-port-210669"
	W0919 17:28:51.373699  118443 addons.go:240] addon dashboard should already be in state true
	I0919 17:28:51.373745  118443 host.go:66] Checking if "default-k8s-diff-port-210669" exists ...
	I0919 17:28:51.373756  118443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:28:51.373784  118443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:28:51.373954  118443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:28:51.374093  118443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:28:51.374036  118443 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-210669"
	I0919 17:28:51.374135  118443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:28:51.374153  118443 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-210669"
	W0919 17:28:51.374167  118443 addons.go:240] addon metrics-server should already be in state true
	I0919 17:28:51.374173  118443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:28:51.374212  118443 host.go:66] Checking if "default-k8s-diff-port-210669" exists ...
	I0919 17:28:51.374538  118443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:28:51.374570  118443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:28:51.390116  118443 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-210669" context rescaled to 1 replicas
	I0919 17:28:51.390169  118443 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.204 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0919 17:28:51.392194  118443 out.go:177] * Verifying Kubernetes components...
	I0919 17:28:51.393721  118443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 17:28:51.392465  118443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38721
	I0919 17:28:51.392480  118443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34143
	I0919 17:28:51.392490  118443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33327
	I0919 17:28:51.392502  118443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44031
	I0919 17:28:51.392830  118443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37383
	I0919 17:28:51.394283  118443 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:28:51.394327  118443 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:28:51.394384  118443 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:28:51.394845  118443 main.go:141] libmachine: Using API Version  1
	I0919 17:28:51.394869  118443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:28:51.394848  118443 main.go:141] libmachine: Using API Version  1
	I0919 17:28:51.394936  118443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:28:51.394937  118443 main.go:141] libmachine: Using API Version  1
	I0919 17:28:51.394952  118443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:28:51.394995  118443 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:28:51.395276  118443 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:28:51.395319  118443 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:28:51.395355  118443 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:28:51.395412  118443 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:28:51.395449  118443 main.go:141] libmachine: Using API Version  1
	I0919 17:28:51.395459  118443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:28:51.395934  118443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:28:51.395966  118443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:28:51.395997  118443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:28:51.396029  118443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:28:51.396186  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetState
	I0919 17:28:51.396248  118443 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:28:51.396321  118443 main.go:141] libmachine: Using API Version  1
	I0919 17:28:51.396341  118443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:28:51.396513  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetState
	I0919 17:28:51.397021  118443 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:28:51.397690  118443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:28:51.397721  118443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:28:51.400872  118443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:28:51.400900  118443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:28:51.414672  118443 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-210669"
	W0919 17:28:51.414699  118443 addons.go:240] addon default-storageclass should already be in state true
	I0919 17:28:51.414732  118443 host.go:66] Checking if "default-k8s-diff-port-210669" exists ...
	I0919 17:28:51.415113  118443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:28:51.415139  118443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:28:51.416646  118443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37633
	I0919 17:28:51.417143  118443 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:28:51.417725  118443 main.go:141] libmachine: Using API Version  1
	I0919 17:28:51.417746  118443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:28:51.418960  118443 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:28:51.419026  118443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41291
	I0919 17:28:51.419353  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetState
	I0919 17:28:51.419486  118443 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:28:51.419912  118443 main.go:141] libmachine: Using API Version  1
	I0919 17:28:51.419928  118443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:28:51.421055  118443 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:28:51.421264  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .DriverName
	I0919 17:28:51.421327  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .DriverName
	I0919 17:28:51.421467  118443 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0919 17:28:51.421486  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHHostname
	I0919 17:28:51.423582  118443 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 17:28:51.425202  118443 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 17:28:51.425220  118443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 17:28:51.425239  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHHostname
	I0919 17:28:51.424735  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:51.425373  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:95:3e", ip: ""} in network mk-default-k8s-diff-port-210669: {Iface:virbr3 ExpiryTime:2023-09-19 18:28:13 +0000 UTC Type:0 Mac:52:54:00:76:95:3e Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:default-k8s-diff-port-210669 Clientid:01:52:54:00:76:95:3e}
	I0919 17:28:51.425403  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined IP address 192.168.61.204 and MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:51.425438  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHPort
	I0919 17:28:51.426096  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHKeyPath
	I0919 17:28:51.426448  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHUsername
	I0919 17:28:51.426752  118443 sshutil.go:53] new ssh client: &{IP:192.168.61.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/default-k8s-diff-port-210669/id_rsa Username:docker}
	I0919 17:28:51.428578  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:51.428949  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:95:3e", ip: ""} in network mk-default-k8s-diff-port-210669: {Iface:virbr3 ExpiryTime:2023-09-19 18:28:13 +0000 UTC Type:0 Mac:52:54:00:76:95:3e Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:default-k8s-diff-port-210669 Clientid:01:52:54:00:76:95:3e}
	I0919 17:28:51.428985  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined IP address 192.168.61.204 and MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:51.429214  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHPort
	I0919 17:28:51.429388  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHKeyPath
	I0919 17:28:51.429496  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHUsername
	I0919 17:28:51.429598  118443 sshutil.go:53] new ssh client: &{IP:192.168.61.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/default-k8s-diff-port-210669/id_rsa Username:docker}
	I0919 17:28:51.434089  118443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40513
	I0919 17:28:51.434784  118443 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:28:51.435232  118443 main.go:141] libmachine: Using API Version  1
	I0919 17:28:51.435253  118443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:28:51.435502  118443 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:28:51.435630  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetState
	I0919 17:28:51.437206  118443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36371
	I0919 17:28:51.437408  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .DriverName
	I0919 17:28:51.439687  118443 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0919 17:28:51.437996  118443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38557
	I0919 17:28:51.438899  118443 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:28:51.442427  118443 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0919 17:28:51.441494  118443 main.go:141] libmachine: Using API Version  1
	I0919 17:28:51.441573  118443 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:28:51.443752  118443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:28:51.443754  118443 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0919 17:28:51.443772  118443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0919 17:28:51.443798  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHHostname
	I0919 17:28:51.444150  118443 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:28:51.444289  118443 main.go:141] libmachine: Using API Version  1
	I0919 17:28:51.444302  118443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:28:51.444638  118443 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:28:51.444837  118443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:28:51.444869  118443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:28:51.444872  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetState
	I0919 17:28:51.447338  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:51.447928  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:95:3e", ip: ""} in network mk-default-k8s-diff-port-210669: {Iface:virbr3 ExpiryTime:2023-09-19 18:28:13 +0000 UTC Type:0 Mac:52:54:00:76:95:3e Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:default-k8s-diff-port-210669 Clientid:01:52:54:00:76:95:3e}
	I0919 17:28:51.447970  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined IP address 192.168.61.204 and MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:51.448113  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .DriverName
	I0919 17:28:51.448185  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHPort
	I0919 17:28:51.448416  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHKeyPath
	I0919 17:28:51.450148  118443 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0919 17:28:48.579913  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:50.581330  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:51.448583  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHUsername
	I0919 17:28:51.451677  118443 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 17:28:51.451689  118443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 17:28:51.451708  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHHostname
	I0919 17:28:51.453818  118443 sshutil.go:53] new ssh client: &{IP:192.168.61.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/default-k8s-diff-port-210669/id_rsa Username:docker}
	I0919 17:28:51.454868  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:51.454902  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:95:3e", ip: ""} in network mk-default-k8s-diff-port-210669: {Iface:virbr3 ExpiryTime:2023-09-19 18:28:13 +0000 UTC Type:0 Mac:52:54:00:76:95:3e Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:default-k8s-diff-port-210669 Clientid:01:52:54:00:76:95:3e}
	I0919 17:28:51.454926  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined IP address 192.168.61.204 and MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:51.454940  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHPort
	I0919 17:28:51.455087  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHKeyPath
	I0919 17:28:51.455259  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHUsername
	I0919 17:28:51.455376  118443 sshutil.go:53] new ssh client: &{IP:192.168.61.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/default-k8s-diff-port-210669/id_rsa Username:docker}
	I0919 17:28:51.495905  118443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44695
	I0919 17:28:51.496354  118443 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:28:51.496905  118443 main.go:141] libmachine: Using API Version  1
	I0919 17:28:51.496936  118443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:28:51.497450  118443 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:28:51.497679  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetState
	I0919 17:28:51.499561  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .DriverName
	I0919 17:28:51.499865  118443 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 17:28:51.499882  118443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 17:28:51.499903  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHHostname
	I0919 17:28:51.503205  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:51.503858  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:95:3e", ip: ""} in network mk-default-k8s-diff-port-210669: {Iface:virbr3 ExpiryTime:2023-09-19 18:28:13 +0000 UTC Type:0 Mac:52:54:00:76:95:3e Iaid: IPaddr:192.168.61.204 Prefix:24 Hostname:default-k8s-diff-port-210669 Clientid:01:52:54:00:76:95:3e}
	I0919 17:28:51.503967  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | domain default-k8s-diff-port-210669 has defined IP address 192.168.61.204 and MAC address 52:54:00:76:95:3e in network mk-default-k8s-diff-port-210669
	I0919 17:28:51.504265  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHPort
	I0919 17:28:51.504517  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHKeyPath
	I0919 17:28:51.504743  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .GetSSHUsername
	I0919 17:28:51.504971  118443 sshutil.go:53] new ssh client: &{IP:192.168.61.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/default-k8s-diff-port-210669/id_rsa Username:docker}
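
The sshutil.go:53 lines above correspond to minikube opening key-authenticated SSH connections to the VM so it can copy addon manifests and run commands. A minimal standalone sketch of such a client, using golang.org/x/crypto/ssh rather than minikube's own sshutil package, with the address, user, and key path taken from the log (the `ls` command at the end is just an illustrative probe):

    package main

    import (
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Key path, user, and address are the ones printed by sshutil.go:53 above.
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/17240-65689/.minikube/machines/default-k8s-diff-port-210669/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
    	}
    	client, err := ssh.Dial("tcp", "192.168.61.204:22", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()
    	out, _ := sess.CombinedOutput("sudo ls /etc/kubernetes/addons")
    	os.Stdout.Write(out)
    }
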
	I0919 17:28:51.665942  118443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 17:28:51.706580  118443 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0919 17:28:51.706609  118443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0919 17:28:51.707068  118443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 17:28:51.722522  118443 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 17:28:51.722547  118443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0919 17:28:51.859194  118443 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 17:28:51.859233  118443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 17:28:51.869654  118443 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0919 17:28:51.869678  118443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0919 17:28:51.953907  118443 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0919 17:28:51.953951  118443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0919 17:28:52.023505  118443 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 17:28:52.023534  118443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 17:28:52.070072  118443 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0919 17:28:52.070161  118443 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-210669" to be "Ready" ...
	I0919 17:28:52.070174  118443 docker.go:636] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0919 17:28:52.070319  118443 cache_images.go:84] Images are preloaded, skipping loading
	I0919 17:28:52.070330  118443 cache_images.go:262] succeeded pushing to: default-k8s-diff-port-210669
	I0919 17:28:52.070338  118443 cache_images.go:263] failed pushing to: 
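
The preload check above runs `docker images --format {{.Repository}}:{{.Tag}}` on the VM (see the Run line earlier) and compares the result against the images required for Kubernetes v1.28.2, concluding "Images are preloaded, skipping loading". A minimal sketch of that comparison, using a hypothetical required list and plain os/exec rather than minikube's actual cache_images.go logic:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// A subset of the images listed in the stdout block above; illustrative only.
    	required := []string{
    		"registry.k8s.io/kube-apiserver:v1.28.2",
    		"registry.k8s.io/etcd:3.5.9-0",
    		"registry.k8s.io/coredns/coredns:v1.10.1",
    	}
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	have := map[string]bool{}
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		have[line] = true
    	}
    	for _, img := range required {
    		if !have[img] {
    			fmt.Println("missing, would load:", img)
    			return
    		}
    	}
    	fmt.Println("Images are preloaded, skipping loading")
    }
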
	I0919 17:28:52.070366  118443 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:52.070383  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .Close
	I0919 17:28:52.070694  118443 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:52.070714  118443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:52.070725  118443 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:52.070736  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .Close
	I0919 17:28:52.070758  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | Closing plugin on server side
	I0919 17:28:52.071002  118443 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:52.071020  118443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:52.071058  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | Closing plugin on server side
	I0919 17:28:52.152210  118443 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0919 17:28:52.152230  118443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0919 17:28:52.154696  118443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 17:28:52.256368  118443 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0919 17:28:52.256392  118443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0919 17:28:52.359618  118443 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0919 17:28:52.359653  118443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0919 17:28:52.453105  118443 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0919 17:28:52.453136  118443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0919 17:28:52.500700  118443 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0919 17:28:52.500731  118443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0919 17:28:52.522994  118443 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0919 17:28:52.523030  118443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0919 17:28:52.540873  118443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
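
Addon installation above follows a two-step pattern: each manifest is copied to /etc/kubernetes/addons ("scp memory -->" lines), then a single sudo kubectl apply is issued with one -f flag per file, exactly as in the Run line just above. A sketch that rebuilds such a command line from a manifest list, using the kubeconfig and kubectl paths shown in the log; this mirrors the command text only, not minikube's internal helpers:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // buildApplyCmd assembles one sudo kubectl apply invocation with a -f flag
    // per addon manifest, matching the shape of the logged command.
    func buildApplyCmd(kubectl string, manifests []string) string {
    	args := []string{"sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	return strings.Join(args, " ")
    }

    func main() {
    	fmt.Println(buildApplyCmd(
    		"/var/lib/minikube/binaries/v1.28.2/kubectl",
    		[]string{
    			"/etc/kubernetes/addons/dashboard-ns.yaml",
    			"/etc/kubernetes/addons/dashboard-svc.yaml",
    		},
    	))
    }
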
	I0919 17:28:50.180216  117954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:52.677647  117954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:54.084967  118443 node_ready.go:49] node "default-k8s-diff-port-210669" has status "Ready":"True"
	I0919 17:28:54.084997  118443 node_ready.go:38] duration metric: took 2.01480176s waiting for node "default-k8s-diff-port-210669" to be "Ready" ...
	I0919 17:28:54.085010  118443 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 17:28:54.095501  118443 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hm48n" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:54.120256  118443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.413145071s)
	I0919 17:28:54.120316  118443 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:54.120330  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .Close
	I0919 17:28:54.120339  118443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.965610982s)
	I0919 17:28:54.120407  118443 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:54.120428  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .Close
	I0919 17:28:54.120562  118443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.454584869s)
	I0919 17:28:54.120615  118443 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:54.120630  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .Close
	I0919 17:28:54.120666  118443 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:54.120687  118443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:54.120697  118443 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:54.120706  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .Close
	I0919 17:28:54.120771  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | Closing plugin on server side
	I0919 17:28:54.120804  118443 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:54.120818  118443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:54.120833  118443 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:54.120843  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .Close
	I0919 17:28:54.120898  118443 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:54.120934  118443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:54.120946  118443 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:54.120957  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .Close
	I0919 17:28:54.122658  118443 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:54.122668  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | Closing plugin on server side
	I0919 17:28:54.122679  118443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:54.122692  118443 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:54.122704  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .Close
	I0919 17:28:54.122713  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | Closing plugin on server side
	I0919 17:28:54.122760  118443 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:54.122777  118443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:54.122786  118443 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-210669"
	I0919 17:28:54.122896  118443 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:54.122916  118443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:54.122952  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | Closing plugin on server side
	I0919 17:28:54.123026  118443 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:54.123041  118443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:54.644188  118443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.103260789s)
	I0919 17:28:54.644263  118443 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:54.644278  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .Close
	I0919 17:28:54.644661  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | Closing plugin on server side
	I0919 17:28:54.646406  118443 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:54.646455  118443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:54.646472  118443 main.go:141] libmachine: Making call to close driver server
	I0919 17:28:54.646483  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) Calling .Close
	I0919 17:28:54.646861  118443 main.go:141] libmachine: Successfully made call to close driver server
	I0919 17:28:54.646881  118443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 17:28:54.646923  118443 main.go:141] libmachine: (default-k8s-diff-port-210669) DBG | Closing plugin on server side
	I0919 17:28:54.649077  118443 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-210669 addons enable metrics-server	
	
	
	I0919 17:28:54.650838  118443 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0919 17:28:53.080882  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:55.578731  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:54.652877  118443 addons.go:502] enable addons completed in 3.280150786s: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0919 17:28:56.114952  118443 pod_ready.go:102] pod "coredns-5dd5756b68-hm48n" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:55.177574  117954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:57.178557  117954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:57.581227  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:29:00.079314  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:58.124298  118443 pod_ready.go:102] pod "coredns-5dd5756b68-hm48n" in "kube-system" namespace has status "Ready":"False"
	I0919 17:28:58.616945  118443 pod_ready.go:92] pod "coredns-5dd5756b68-hm48n" in "kube-system" namespace has status "Ready":"True"
	I0919 17:28:58.616980  118443 pod_ready.go:81] duration metric: took 4.521447155s waiting for pod "coredns-5dd5756b68-hm48n" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:58.616992  118443 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-210669" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:58.624207  118443 pod_ready.go:92] pod "etcd-default-k8s-diff-port-210669" in "kube-system" namespace has status "Ready":"True"
	I0919 17:28:58.624232  118443 pod_ready.go:81] duration metric: took 7.231235ms waiting for pod "etcd-default-k8s-diff-port-210669" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:58.624244  118443 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-210669" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:58.630475  118443 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-210669" in "kube-system" namespace has status "Ready":"True"
	I0919 17:28:58.630494  118443 pod_ready.go:81] duration metric: took 6.243921ms waiting for pod "kube-apiserver-default-k8s-diff-port-210669" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:58.630504  118443 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-210669" in "kube-system" namespace to be "Ready" ...
	I0919 17:29:00.765133  118443 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-210669" in "kube-system" namespace has status "Ready":"False"
	I0919 17:29:02.764049  118443 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-210669" in "kube-system" namespace has status "Ready":"True"
	I0919 17:29:02.764071  118443 pod_ready.go:81] duration metric: took 4.133559206s waiting for pod "kube-controller-manager-default-k8s-diff-port-210669" in "kube-system" namespace to be "Ready" ...
	I0919 17:29:02.764081  118443 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bn9gt" in "kube-system" namespace to be "Ready" ...
	I0919 17:29:02.769557  118443 pod_ready.go:92] pod "kube-proxy-bn9gt" in "kube-system" namespace has status "Ready":"True"
	I0919 17:29:02.769584  118443 pod_ready.go:81] duration metric: took 5.496024ms waiting for pod "kube-proxy-bn9gt" in "kube-system" namespace to be "Ready" ...
	I0919 17:29:02.769596  118443 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-210669" in "kube-system" namespace to be "Ready" ...
	I0919 17:29:02.775289  118443 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-210669" in "kube-system" namespace has status "Ready":"True"
	I0919 17:29:02.775316  118443 pod_ready.go:81] duration metric: took 5.708368ms waiting for pod "kube-scheduler-default-k8s-diff-port-210669" in "kube-system" namespace to be "Ready" ...
	I0919 17:29:02.775329  118443 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-jr5n2" in "kube-system" namespace to be "Ready" ...
	I0919 17:28:59.676920  117954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace has status "Ready":"False"
	I0919 17:29:01.678022  117954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace has status "Ready":"False"
	I0919 17:29:02.079519  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:29:04.580529  117576 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vxx4h" in "kube-system" namespace has status "Ready":"False"
	I0919 17:29:05.065578  118443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr5n2" in "kube-system" namespace has status "Ready":"False"
	I0919 17:29:07.564466  118443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jr5n2" in "kube-system" namespace has status "Ready":"False"
	I0919 17:29:03.681568  117954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace has status "Ready":"False"
	I0919 17:29:05.684737  117954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace has status "Ready":"False"
	I0919 17:29:08.177666  117954 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rnjvj" in "kube-system" namespace has status "Ready":"False"
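
The pod_ready.go lines throughout this log poll each system-critical pod's Ready condition under a 6m0s cap, reporting "Ready":"False" until the condition flips. A minimal client-go sketch of that kind of wait loop, assuming the kubeconfig path from the log and one of the pod names above; this is an illustration, not minikube's pod_ready implementation:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Poll the PodReady condition every 2s, giving up after 6 minutes.
    	err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-hm48n", metav1.GetOptions{})
    		if err != nil {
    			return false, nil // keep polling through transient API errors
    		}
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady {
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    	fmt.Println("ready:", err == nil)
    }
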
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-09-19 17:27:46 UTC, ends at Tue 2023-09-19 17:29:09 UTC. --
	Sep 19 17:28:56 old-k8s-version-367105 dockerd[1089]: time="2023-09-19T17:28:56.546177976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 19 17:28:56 old-k8s-version-367105 dockerd[1089]: time="2023-09-19T17:28:56.546338666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 17:29:02 old-k8s-version-367105 dockerd[1089]: time="2023-09-19T17:29:02.347355633Z" level=info msg="shim disconnected" id=8a06172dd6c6dd90d8e35fe9eb6ef0156a0503f8219caa16364562f054c5e494 namespace=moby
	Sep 19 17:29:02 old-k8s-version-367105 dockerd[1089]: time="2023-09-19T17:29:02.347440084Z" level=warning msg="cleaning up after shim disconnected" id=8a06172dd6c6dd90d8e35fe9eb6ef0156a0503f8219caa16364562f054c5e494 namespace=moby
	Sep 19 17:29:02 old-k8s-version-367105 dockerd[1089]: time="2023-09-19T17:29:02.347451851Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 19 17:29:02 old-k8s-version-367105 dockerd[1083]: time="2023-09-19T17:29:02.349873492Z" level=info msg="ignoring event" container=8a06172dd6c6dd90d8e35fe9eb6ef0156a0503f8219caa16364562f054c5e494 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 17:29:02 old-k8s-version-367105 dockerd[1083]: time="2023-09-19T17:29:02.553073650Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Sep 19 17:29:02 old-k8s-version-367105 dockerd[1083]: time="2023-09-19T17:29:02.553204525Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Sep 19 17:29:02 old-k8s-version-367105 dockerd[1083]: time="2023-09-19T17:29:02.563923434Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Sep 19 17:29:02 old-k8s-version-367105 dockerd[1089]: time="2023-09-19T17:29:02.627940589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 19 17:29:02 old-k8s-version-367105 dockerd[1089]: time="2023-09-19T17:29:02.628063563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 17:29:02 old-k8s-version-367105 dockerd[1089]: time="2023-09-19T17:29:02.628091908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 19 17:29:02 old-k8s-version-367105 dockerd[1089]: time="2023-09-19T17:29:02.628106374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 17:29:03 old-k8s-version-367105 dockerd[1089]: time="2023-09-19T17:29:03.088895837Z" level=info msg="shim disconnected" id=4c307d6cbf93c1b3c8daa374a70dc3c9c5babe1b6b864fc3d4738a0c5f14df91 namespace=moby
	Sep 19 17:29:03 old-k8s-version-367105 dockerd[1089]: time="2023-09-19T17:29:03.089315374Z" level=warning msg="cleaning up after shim disconnected" id=4c307d6cbf93c1b3c8daa374a70dc3c9c5babe1b6b864fc3d4738a0c5f14df91 namespace=moby
	Sep 19 17:29:03 old-k8s-version-367105 dockerd[1089]: time="2023-09-19T17:29:03.089488815Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 19 17:29:03 old-k8s-version-367105 dockerd[1083]: time="2023-09-19T17:29:03.091207485Z" level=info msg="ignoring event" container=4c307d6cbf93c1b3c8daa374a70dc3c9c5babe1b6b864fc3d4738a0c5f14df91 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 19 17:29:04 old-k8s-version-367105 dockerd[1089]: time="2023-09-19T17:29:04.183598769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 19 17:29:04 old-k8s-version-367105 dockerd[1089]: time="2023-09-19T17:29:04.183682124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 17:29:04 old-k8s-version-367105 dockerd[1089]: time="2023-09-19T17:29:04.183740543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 19 17:29:04 old-k8s-version-367105 dockerd[1089]: time="2023-09-19T17:29:04.183754584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 19 17:29:04 old-k8s-version-367105 dockerd[1089]: time="2023-09-19T17:29:04.618161929Z" level=info msg="shim disconnected" id=b4efcb1aca642cd7e59d19f179ada700665036695713a4da910f1be5473d8d4e namespace=moby
	Sep 19 17:29:04 old-k8s-version-367105 dockerd[1089]: time="2023-09-19T17:29:04.618241980Z" level=warning msg="cleaning up after shim disconnected" id=b4efcb1aca642cd7e59d19f179ada700665036695713a4da910f1be5473d8d4e namespace=moby
	Sep 19 17:29:04 old-k8s-version-367105 dockerd[1089]: time="2023-09-19T17:29:04.618253889Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 19 17:29:04 old-k8s-version-367105 dockerd[1083]: time="2023-09-19T17:29:04.618176316Z" level=info msg="ignoring event" container=b4efcb1aca642cd7e59d19f179ada700665036695713a4da910f1be5473d8d4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* time="2023-09-19T17:29:09Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
	CONTAINER ID   IMAGE                         COMMAND                  CREATED          STATUS                            PORTS     NAMES
	b4efcb1aca64   a90209bb39e3                  "nginx -g 'daemon of…"   5 seconds ago    Exited (1) 5 seconds ago                    k8s_dashboard-metrics-scraper_dashboard-metrics-scraper-d6b4b5544-h9lt5_kubernetes-dashboard_3c423704-309c-49a7-a7c6-05da5bec0ef8_1
	bee6d085e6d4   kubernetesui/dashboard        "/dashboard --insecu…"   13 seconds ago   Up 13 seconds                               k8s_kubernetes-dashboard_kubernetes-dashboard-84b68f675b-2dlmj_kubernetes-dashboard_572073a8-dd00-4043-a32f-1cf26ef4170d_0
	50c3df2c3169   k8s.gcr.io/pause:3.1          "/pause"                 22 seconds ago   Up 21 seconds                               k8s_POD_kubernetes-dashboard-84b68f675b-2dlmj_kubernetes-dashboard_572073a8-dd00-4043-a32f-1cf26ef4170d_0
	50cea5346835   k8s.gcr.io/pause:3.1          "/pause"                 22 seconds ago   Up 21 seconds                               k8s_POD_dashboard-metrics-scraper-d6b4b5544-h9lt5_kubernetes-dashboard_3c423704-309c-49a7-a7c6-05da5bec0ef8_0
	fa8d4d78db64   k8s.gcr.io/pause:3.1          "/pause"                 23 seconds ago   Up 22 seconds                               k8s_POD_metrics-server-74d5856cc6-87v9h_kube-system_2ad723ef-9de6-427f-bbd4-57c97d0303ea_0
	e9a50232654e   c21b0c7400f9                  "/usr/local/bin/kube…"   37 seconds ago   Up 37 seconds                               k8s_kube-proxy_kube-proxy-r2vs7_kube-system_13a9fcc3-1efb-4196-939b-8e97458c58a2_1
	0c9eee0ab479   k8s.gcr.io/pause:3.1          "/pause"                 37 seconds ago   Up 37 seconds                               k8s_POD_kube-proxy-r2vs7_kube-system_13a9fcc3-1efb-4196-939b-8e97458c58a2_1
	18edf282380e   bf261d157914                  "/coredns -conf /etc…"   37 seconds ago   Up 37 seconds                               k8s_coredns_coredns-5644d7b6d9-wjqc6_kube-system_92117877-e0fe-4d40-9bce-aaadfa89e39b_1
	8a06172dd6c6   6e38f40d628d                  "/storage-provisioner"   38 seconds ago   Exited (1) 7 seconds ago                    k8s_storage-provisioner_storage-provisioner_kube-system_c4f9dbb5-a4da-498f-9ffd-a9aaf04f5d12_1
	350d3135995b   k8s.gcr.io/pause:3.1          "/pause"                 38 seconds ago   Up 37 seconds                               k8s_POD_storage-provisioner_kube-system_c4f9dbb5-a4da-498f-9ffd-a9aaf04f5d12_1
	92ad9ed449da   k8s.gcr.io/pause:3.1          "/pause"                 38 seconds ago   Up 37 seconds                               k8s_POD_coredns-5644d7b6d9-wjqc6_kube-system_92117877-e0fe-4d40-9bce-aaadfa89e39b_1
	9991c5842749   56cc512116c8                  "sleep 3600"             39 seconds ago   Up 38 seconds                               k8s_busybox_busybox_default_9d284d5d-1f8d-4e81-ae0e-a092ce1f7950_1
	07b3af464c0f   k8s.gcr.io/pause:3.1          "/pause"                 39 seconds ago   Up 39 seconds                               k8s_POD_busybox_default_9d284d5d-1f8d-4e81-ae0e-a092ce1f7950_1
	a7bb28498df1   06a629a7e51c                  "kube-controller-man…"   46 seconds ago   Up 45 seconds                               k8s_kube-controller-manager_kube-controller-manager-old-k8s-version-367105_kube-system_b39706a67360d65bfa3cf2560791efe9_0
	1b9652844e8e   301ddc62b80b                  "kube-scheduler --au…"   46 seconds ago   Up 45 seconds                               k8s_kube-scheduler_kube-scheduler-old-k8s-version-367105_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_1
	231452e54133   b305571ca60a                  "kube-apiserver --ad…"   46 seconds ago   Up 45 seconds                               k8s_kube-apiserver_kube-apiserver-old-k8s-version-367105_kube-system_4e26db56db12b6650bcead1515de7be6_1
	fa0eb7badb58   b2756210eeab                  "etcd --advertise-cl…"   46 seconds ago   Up 45 seconds                               k8s_etcd_etcd-old-k8s-version-367105_kube-system_657d8cfd9b210e1ef5e31ee3255f7194_1
	977cde5aa578   k8s.gcr.io/pause:3.1          "/pause"                 46 seconds ago   Up 46 seconds                               k8s_POD_kube-scheduler-old-k8s-version-367105_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_1
	fc8b5aa19660   k8s.gcr.io/pause:3.1          "/pause"                 46 seconds ago   Up 45 seconds                               k8s_POD_kube-controller-manager-old-k8s-version-367105_kube-system_b39706a67360d65bfa3cf2560791efe9_0
	f1f796bd593c   k8s.gcr.io/pause:3.1          "/pause"                 46 seconds ago   Up 46 seconds                               k8s_POD_kube-apiserver-old-k8s-version-367105_kube-system_4e26db56db12b6650bcead1515de7be6_1
	bbbc795a620f   k8s.gcr.io/pause:3.1          "/pause"                 46 seconds ago   Up 46 seconds                               k8s_POD_etcd-old-k8s-version-367105_kube-system_657d8cfd9b210e1ef5e31ee3255f7194_1
	5aeefed6adce   gcr.io/k8s-minikube/busybox   "sleep 3600"             2 minutes ago    Exited (137) About a minute ago             k8s_busybox_busybox_default_9d284d5d-1f8d-4e81-ae0e-a092ce1f7950_0
	29d25b5f2652   k8s.gcr.io/pause:3.1          "/pause"                 2 minutes ago    Exited (0) 2 minutes ago                    k8s_POD_busybox_default_9d284d5d-1f8d-4e81-ae0e-a092ce1f7950_0
	957eb4fc8e8a   k8s.gcr.io/pause:3.1          "/pause"                 3 minutes ago    Exited (0) 2 minutes ago                    k8s_POD_storage-provisioner_kube-system_c4f9dbb5-a4da-498f-9ffd-a9aaf04f5d12_0
	08335c189fc9   bf261d157914                  "/coredns -conf /etc…"   3 minutes ago    Exited (0) 2 minutes ago                    k8s_coredns_coredns-5644d7b6d9-wjqc6_kube-system_92117877-e0fe-4d40-9bce-aaadfa89e39b_0
	9fa1901a5efd   c21b0c7400f9                  "/usr/local/bin/kube…"   3 minutes ago    Exited (2) 2 minutes ago                    k8s_kube-proxy_kube-proxy-r2vs7_kube-system_13a9fcc3-1efb-4196-939b-8e97458c58a2_0
	03f76c90629d   k8s.gcr.io/pause:3.1          "/pause"                 3 minutes ago    Exited (0) 2 minutes ago                    k8s_POD_coredns-5644d7b6d9-wjqc6_kube-system_92117877-e0fe-4d40-9bce-aaadfa89e39b_0
	5a83916c0cd9   k8s.gcr.io/pause:3.1          "/pause"                 3 minutes ago    Exited (0) 2 minutes ago                    k8s_POD_kube-proxy-r2vs7_kube-system_13a9fcc3-1efb-4196-939b-8e97458c58a2_0
	74289851f17e   b2756210eeab                  "etcd --advertise-cl…"   4 minutes ago    Exited (0) 2 minutes ago                    k8s_etcd_etcd-old-k8s-version-367105_kube-system_657d8cfd9b210e1ef5e31ee3255f7194_0
	66c34702bde2   b305571ca60a                  "kube-apiserver --ad…"   4 minutes ago    Exited (137) About a minute ago             k8s_kube-apiserver_kube-apiserver-old-k8s-version-367105_kube-system_4e26db56db12b6650bcead1515de7be6_0
	519a593bc277   06a629a7e51c                  "kube-controller-man…"   4 minutes ago    Exited (2) 2 minutes ago                    k8s_kube-controller-manager_kube-controller-manager-old-k8s-version-367105_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	055183b8db4b   301ddc62b80b                  "kube-scheduler --au…"   4 minutes ago    Exited (2) 2 minutes ago                    k8s_kube-scheduler_kube-scheduler-old-k8s-version-367105_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	5cfad067cf99   k8s.gcr.io/pause:3.1          "/pause"                 4 minutes ago    Exited (0) 2 minutes ago                    k8s_POD_etcd-old-k8s-version-367105_kube-system_657d8cfd9b210e1ef5e31ee3255f7194_0
	d6a66ee0e63c   k8s.gcr.io/pause:3.1          "/pause"                 4 minutes ago    Exited (0) 2 minutes ago                    k8s_POD_kube-scheduler-old-k8s-version-367105_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	e98675b0244a   k8s.gcr.io/pause:3.1          "/pause"                 4 minutes ago    Exited (0) 2 minutes ago                    k8s_POD_kube-controller-manager-old-k8s-version-367105_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	51816ee18c21   k8s.gcr.io/pause:3.1          "/pause"                 4 minutes ago    Exited (0) 2 minutes ago                    k8s_POD_kube-apiserver-old-k8s-version-367105_kube-system_4e26db56db12b6650bcead1515de7be6_0
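
The fatal line at the top of this section shows runtime validation failing because dockershim does not implement the CRI runtime.v1 RuntimeService. A sketch that reproduces the probe by dialing the CRI socket and calling Version on the v1 API (google.golang.org/grpc and k8s.io/cri-api imports assumed available); against dockershim the call fails with codes.Unimplemented, matching the log:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()
    	conn, err := grpc.DialContext(ctx, "unix:///var/run/dockershim.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()
    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	_, err = client.Version(ctx, &runtimeapi.VersionRequest{})
    	// Expected here: rpc error code Unimplemented, unknown service runtime.v1.RuntimeService.
    	fmt.Println("runtime.v1 Version error:", err)
    }
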
	
	* 
	* ==> coredns [08335c189fc9] <==
	* E0919 17:26:06.568187       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0919 17:26:06.568301       1 trace.go:82] Trace[195386344]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2023-09-19 17:25:36.564956261 +0000 UTC m=+0.041186226) (total time: 30.003323829s):
	Trace[195386344]: [30.003323829s] [30.003323829s] END
	E0919 17:26:06.568517       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	2023-09-19T17:26:09.707Z [INFO] plugin/reload: Running configuration MD5 = 6d61b2f41ed11e6ad276aa627263dbc3
	[INFO] Reloading complete
	E0919 17:27:02.961500       1 reflector.go:270] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to watch *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=471&timeout=6m16s&timeoutSeconds=376&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
	E0919 17:27:02.961680       1 reflector.go:270] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=472&timeout=6m42s&timeoutSeconds=402&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
	E0919 17:27:02.961778       1 reflector.go:270] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to watch *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=146&timeout=5m50s&timeoutSeconds=350&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	
	* 
	* ==> coredns [18edf282380e] <==
	* E0919 17:29:02.282259       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0919 17:29:02.282634       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0919 17:29:02.283183       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	2023-09-19T17:28:37.282Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-09-19T17:28:37.311Z [INFO] 127.0.0.1:59126 - 50190 "HINFO IN 1407868418113453327.8154579726131794890. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030461121s
	2023-09-19T17:28:41.972Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:28:51.972Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2023-09-19T17:29:01.972Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	I0919 17:29:02.281767       1 trace.go:82] Trace[1102417937]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2023-09-19 17:28:32.280625398 +0000 UTC m=+0.031270112) (total time: 30.001074463s):
	Trace[1102417937]: [30.001074463s] [30.001074463s] END
	I0919 17:29:02.282615       1 trace.go:82] Trace[1742179394]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2023-09-19 17:28:32.281571431 +0000 UTC m=+0.032216156) (total time: 30.001017355s):
	Trace[1742179394]: [30.001017355s] [30.001017355s] END
	I0919 17:29:02.283143       1 trace.go:82] Trace[279116123]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2023-09-19 17:28:32.282641627 +0000 UTC m=+0.033286338) (total time: 30.000477775s):
	Trace[279116123]: [30.000477775s] [30.000477775s] END
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-367105
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-367105
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d69d3d50d3fb420e04057e6545e9fd90e260986
	                    minikube.k8s.io/name=old-k8s-version-367105
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_19T17_25_20_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 19 Sep 2023 17:25:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 19 Sep 2023 17:28:30 +0000   Tue, 19 Sep 2023 17:25:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 19 Sep 2023 17:28:30 +0000   Tue, 19 Sep 2023 17:25:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 19 Sep 2023 17:28:30 +0000   Tue, 19 Sep 2023 17:25:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 19 Sep 2023 17:28:30 +0000   Tue, 19 Sep 2023 17:25:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.162
	  Hostname:    old-k8s-version-367105
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 307e7aa516ac4461b3572993b000ca61
	 System UUID:                307e7aa5-16ac-4461-b357-2993b000ca61
	 Boot ID:                    cc6c30cf-64ae-4f27-bb88-d24a49a34d6f
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://24.0.6
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (11 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  default                    busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                coredns-5644d7b6d9-wjqc6                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m35s
	  kube-system                etcd-old-k8s-version-367105                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                kube-apiserver-old-k8s-version-367105             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                kube-controller-manager-old-k8s-version-367105    200m (10%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                kube-proxy-r2vs7                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                kube-scheduler-old-k8s-version-367105             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                metrics-server-74d5856cc6-87v9h                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         23s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kubernetes-dashboard       dashboard-metrics-scraper-d6b4b5544-h9lt5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kubernetes-dashboard       kubernetes-dashboard-84b68f675b-2dlmj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From                                Message
	  ----    ------                   ----                 ----                                -------
	  Normal  NodeHasSufficientMemory  4m1s (x8 over 4m2s)  kubelet, old-k8s-version-367105     Node old-k8s-version-367105 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m1s (x8 over 4m2s)  kubelet, old-k8s-version-367105     Node old-k8s-version-367105 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m1s (x7 over 4m2s)  kubelet, old-k8s-version-367105     Node old-k8s-version-367105 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m33s                kube-proxy, old-k8s-version-367105  Starting kube-proxy.
	  Normal  Starting                 47s                  kubelet, old-k8s-version-367105     Starting kubelet.
	  Normal  NodeHasSufficientMemory  47s (x8 over 47s)    kubelet, old-k8s-version-367105     Node old-k8s-version-367105 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s (x8 over 47s)    kubelet, old-k8s-version-367105     Node old-k8s-version-367105 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s (x7 over 47s)    kubelet, old-k8s-version-367105     Node old-k8s-version-367105 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  47s                  kubelet, old-k8s-version-367105     Updated Node Allocatable limit across pods
	  Normal  Starting                 37s                  kube-proxy, old-k8s-version-367105  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Sep19 17:27] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.077521] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.449306] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.486463] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.167386] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.521318] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.506981] systemd-fstab-generator[516]: Ignoring "noauto" for root device
	[  +0.125408] systemd-fstab-generator[527]: Ignoring "noauto" for root device
	[  +1.337732] systemd-fstab-generator[795]: Ignoring "noauto" for root device
	[Sep19 17:28] systemd-fstab-generator[832]: Ignoring "noauto" for root device
	[  +0.140650] systemd-fstab-generator[843]: Ignoring "noauto" for root device
	[  +0.155181] systemd-fstab-generator[856]: Ignoring "noauto" for root device
	[  +6.308028] systemd-fstab-generator[1074]: Ignoring "noauto" for root device
	[  +1.934612] kauditd_printk_skb: 67 callbacks suppressed
	[ +12.889148] systemd-fstab-generator[1490]: Ignoring "noauto" for root device
	[  +0.469856] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.219836] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +24.119089] kauditd_printk_skb: 9 callbacks suppressed
	
	* 
	* ==> etcd [74289851f17e] <==
	* 2023-09-19 17:25:11.160802 I | raft: d8ad6e5e27c86e8e became follower at term 1
	2023-09-19 17:25:11.171835 W | auth: simple token is not cryptographically signed
	2023-09-19 17:25:11.178337 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-09-19 17:25:11.181806 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-19 17:25:11.182816 I | embed: listening for metrics on http://192.168.83.162:2381
	2023-09-19 17:25:11.183194 I | etcdserver: d8ad6e5e27c86e8e as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-09-19 17:25:11.183990 I | etcdserver/membership: added member d8ad6e5e27c86e8e [https://192.168.83.162:2380] to cluster 97a2540c3ecc9ce4
	2023-09-19 17:25:11.184573 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-09-19 17:25:11.562334 I | raft: d8ad6e5e27c86e8e is starting a new election at term 1
	2023-09-19 17:25:11.571160 I | raft: d8ad6e5e27c86e8e became candidate at term 2
	2023-09-19 17:25:11.571188 I | raft: d8ad6e5e27c86e8e received MsgVoteResp from d8ad6e5e27c86e8e at term 2
	2023-09-19 17:25:11.571202 I | raft: d8ad6e5e27c86e8e became leader at term 2
	2023-09-19 17:25:11.571209 I | raft: raft.node: d8ad6e5e27c86e8e elected leader d8ad6e5e27c86e8e at term 2
	2023-09-19 17:25:11.571475 I | etcdserver: published {Name:old-k8s-version-367105 ClientURLs:[https://192.168.83.162:2379]} to cluster 97a2540c3ecc9ce4
	2023-09-19 17:25:11.571698 I | etcdserver: setting up the initial cluster version to 3.3
	2023-09-19 17:25:11.571776 I | embed: ready to serve client requests
	2023-09-19 17:25:11.573355 I | embed: serving client requests on 192.168.83.162:2379
	2023-09-19 17:25:11.573442 I | embed: ready to serve client requests
	2023-09-19 17:25:11.574853 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-19 17:25:11.593351 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-09-19 17:25:11.593615 I | etcdserver/api: enabled capabilities for version 3.3
	2023-09-19 17:25:32.491440 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/attachdetach-controller\" " with result "range_response_count:1 size:214" took too long (177.375499ms) to execute
	2023-09-19 17:25:38.635474 W | etcdserver: request "header:<ID:7966457272831933619 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/old-k8s-version-367105\" mod_revision:254 > success:<request_put:<key:\"/registry/leases/kube-node-lease/old-k8s-version-367105\" value_size:267 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/old-k8s-version-367105\" > >>" with result "size:16" took too long (130.952306ms) to execute
	2023-09-19 17:27:03.053899 N | pkg/osutil: received terminated signal, shutting down...
	2023-09-19 17:27:03.066632 I | etcdserver: skipped leadership transfer for single member cluster
	
	* 
	* ==> etcd [fa0eb7badb58] <==
	* 2023-09-19 17:28:24.957371 I | etcdserver: heartbeat = 100ms
	2023-09-19 17:28:24.957381 I | etcdserver: election = 1000ms
	2023-09-19 17:28:24.957512 I | etcdserver: snapshot count = 10000
	2023-09-19 17:28:24.957537 I | etcdserver: advertise client URLs = https://192.168.83.162:2379
	2023-09-19 17:28:24.962610 I | etcdserver: restarting member d8ad6e5e27c86e8e in cluster 97a2540c3ecc9ce4 at commit index 513
	2023-09-19 17:28:24.963351 I | raft: d8ad6e5e27c86e8e became follower at term 2
	2023-09-19 17:28:24.963452 I | raft: newRaft d8ad6e5e27c86e8e [peers: [], term: 2, commit: 513, applied: 0, lastindex: 513, lastterm: 2]
	2023-09-19 17:28:25.016700 W | auth: simple token is not cryptographically signed
	2023-09-19 17:28:25.022787 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-09-19 17:28:25.024180 I | etcdserver/membership: added member d8ad6e5e27c86e8e [https://192.168.83.162:2380] to cluster 97a2540c3ecc9ce4
	2023-09-19 17:28:25.024433 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-09-19 17:28:25.024548 I | etcdserver/api: enabled capabilities for version 3.3
	2023-09-19 17:28:25.037128 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-19 17:28:25.040457 I | embed: listening for metrics on http://192.168.83.162:2381
	2023-09-19 17:28:25.040709 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-09-19 17:28:26.364050 I | raft: d8ad6e5e27c86e8e is starting a new election at term 2
	2023-09-19 17:28:26.364107 I | raft: d8ad6e5e27c86e8e became candidate at term 3
	2023-09-19 17:28:26.364120 I | raft: d8ad6e5e27c86e8e received MsgVoteResp from d8ad6e5e27c86e8e at term 3
	2023-09-19 17:28:26.364130 I | raft: d8ad6e5e27c86e8e became leader at term 3
	2023-09-19 17:28:26.364135 I | raft: raft.node: d8ad6e5e27c86e8e elected leader d8ad6e5e27c86e8e at term 3
	2023-09-19 17:28:26.365777 I | etcdserver: published {Name:old-k8s-version-367105 ClientURLs:[https://192.168.83.162:2379]} to cluster 97a2540c3ecc9ce4
	2023-09-19 17:28:26.366329 I | embed: ready to serve client requests
	2023-09-19 17:28:26.367425 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-19 17:28:26.367526 I | embed: ready to serve client requests
	2023-09-19 17:28:26.368311 I | embed: serving client requests on 192.168.83.162:2379
	
	* 
	* ==> kernel <==
	*  17:29:10 up 1 min,  0 users,  load average: 1.46, 0.54, 0.19
	Linux old-k8s-version-367105 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [231452e54133] <==
	* I0919 17:28:29.787904       1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
	I0919 17:28:29.788404       1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0919 17:28:29.828549       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	E0919 17:28:29.832408       1 controller.go:154] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0919 17:28:29.895748       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I0919 17:28:29.939959       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0919 17:28:29.941261       1 cache.go:39] Caches are synced for autoregister controller
	I0919 17:28:29.942258       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0919 17:28:30.703997       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I0919 17:28:30.721292       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0919 17:28:30.883861       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0919 17:28:30.883908       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0919 17:28:31.464528       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0919 17:28:31.486311       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0919 17:28:31.576570       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0919 17:28:31.592300       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0919 17:28:31.603691       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0919 17:28:31.884111       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0919 17:28:31.884466       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 17:28:31.886170       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0919 17:28:31.886307       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 17:28:46.545623       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0919 17:28:46.554096       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I0919 17:28:46.566317       1 controller.go:606] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-apiserver [66c34702bde2] <==
	* W0919 17:27:12.216654       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0919 17:27:12.219636       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0919 17:27:12.230406       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0919 17:27:12.308294       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0919 17:27:12.326224       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0919 17:27:12.342504       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0919 17:27:12.344972       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0919 17:27:12.399618       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0919 17:27:12.404253       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0919 17:27:12.502835       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0919 17:27:12.533351       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0919 17:27:12.655870       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0919 17:27:12.678238       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0919 17:27:12.697430       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0919 17:27:12.702110       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0919 17:27:12.812737       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0919 17:27:12.843006       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0919 17:27:12.868202       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0919 17:27:12.877483       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0919 17:27:12.916989       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0919 17:27:12.934150       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0919 17:27:12.948782       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0919 17:27:13.014709       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0919 17:27:13.020013       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0919 17:27:13.049762       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-controller-manager [519a593bc277] <==
	* W0919 17:25:34.863751       1 node_lifecycle_controller.go:903] Missing timestamp for Node old-k8s-version-367105. Assuming now as a timestamp.
	I0919 17:25:34.863908       1 node_lifecycle_controller.go:1108] Controller detected that zone  is now in state Normal.
	I0919 17:25:34.869937       1 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"old-k8s-version-367105", UID:"62fc1f3b-0a37-4a4f-9b35-fffff38d02fd", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node old-k8s-version-367105 event: Registered Node old-k8s-version-367105 in Controller
	I0919 17:25:34.870305       1 taint_manager.go:186] Starting NoExecuteTaintManager
	I0919 17:25:34.878432       1 shared_informer.go:204] Caches are synced for daemon sets 
	I0919 17:25:34.904693       1 shared_informer.go:204] Caches are synced for resource quota 
	I0919 17:25:34.906188       1 shared_informer.go:204] Caches are synced for attach detach 
	I0919 17:25:34.906368       1 shared_informer.go:204] Caches are synced for node 
	I0919 17:25:34.906379       1 range_allocator.go:172] Starting range CIDR allocator
	I0919 17:25:34.906394       1 shared_informer.go:197] Waiting for caches to sync for cidrallocator
	I0919 17:25:34.907797       1 shared_informer.go:204] Caches are synced for resource quota 
	I0919 17:25:34.946474       1 shared_informer.go:204] Caches are synced for disruption 
	I0919 17:25:34.946524       1 disruption.go:341] Sending events to api server.
	I0919 17:25:34.946779       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"568808e0-ed06-4526-8531-2f76f74182a4", APIVersion:"apps/v1", ResourceVersion:"208", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-r2vs7
	I0919 17:25:34.957310       1 shared_informer.go:204] Caches are synced for ReplicaSet 
	I0919 17:25:34.959021       1 shared_informer.go:204] Caches are synced for deployment 
	I0919 17:25:34.960381       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0919 17:25:34.965859       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0919 17:25:34.965910       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0919 17:25:35.009889       1 shared_informer.go:204] Caches are synced for cidrallocator 
	I0919 17:25:35.020880       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"609f88aa-433a-44eb-8420-23e5fe3c18a0", APIVersion:"apps/v1", ResourceVersion:"322", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-wjqc6
	I0919 17:25:35.020949       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"100f0506-ae46-49b1-a567-1750b0e76db0", APIVersion:"apps/v1", ResourceVersion:"313", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5644d7b6d9 to 1
	I0919 17:25:35.044732       1 range_allocator.go:359] Set node old-k8s-version-367105 PodCIDR to [10.244.0.0/24]
	I0919 17:27:01.905960       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"metrics-server", UID:"4a00f089-0463-4aa7-bfcc-023f9b6751e5", APIVersion:"apps/v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set metrics-server-74d5856cc6 to 1
	E0919 17:27:01.988016       1 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	
	* 
	* ==> kube-controller-manager [a7bb28498df1] <==
	* I0919 17:28:46.645486       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"cf51db6f-c757-48ba-a6c3-6b6b5336a3b6", APIVersion:"apps/v1", ResourceVersion:"589", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-84b68f675b-2dlmj
	I0919 17:28:46.679256       1 shared_informer.go:204] Caches are synced for taint 
	I0919 17:28:46.679574       1 node_lifecycle_controller.go:1208] Initializing eviction metric for zone: 
	W0919 17:28:46.679708       1 node_lifecycle_controller.go:903] Missing timestamp for Node old-k8s-version-367105. Assuming now as a timestamp.
	I0919 17:28:46.679854       1 node_lifecycle_controller.go:1108] Controller detected that zone  is now in state Normal.
	I0919 17:28:46.679692       1 taint_manager.go:186] Starting NoExecuteTaintManager
	I0919 17:28:46.680188       1 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"old-k8s-version-367105", UID:"62fc1f3b-0a37-4a4f-9b35-fffff38d02fd", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node old-k8s-version-367105 event: Registered Node old-k8s-version-367105 in Controller
	I0919 17:28:46.779144       1 shared_informer.go:204] Caches are synced for PVC protection 
	I0919 17:28:46.791363       1 shared_informer.go:204] Caches are synced for persistent volume 
	I0919 17:28:46.798999       1 shared_informer.go:204] Caches are synced for expand 
	I0919 17:28:46.927509       1 shared_informer.go:204] Caches are synced for attach detach 
	I0919 17:28:47.023623       1 shared_informer.go:204] Caches are synced for resource quota 
	I0919 17:28:47.027893       1 shared_informer.go:204] Caches are synced for disruption 
	I0919 17:28:47.027908       1 disruption.go:341] Sending events to api server.
	I0919 17:28:47.027977       1 shared_informer.go:204] Caches are synced for stateful set 
	I0919 17:28:47.041400       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0919 17:28:47.043927       1 shared_informer.go:204] Caches are synced for daemon sets 
	I0919 17:28:47.063569       1 shared_informer.go:204] Caches are synced for ReplicationController 
	I0919 17:28:47.086616       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0919 17:28:47.086671       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	E0919 17:28:47.338512       1 memcache.go:199] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0919 17:28:47.388630       1 memcache.go:111] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0919 17:28:48.171581       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0919 17:28:48.171668       1 shared_informer.go:197] Waiting for caches to sync for resource quota
	I0919 17:28:48.272754       1 shared_informer.go:204] Caches are synced for resource quota 
	
	* 
	* ==> kube-proxy [9fa1901a5efd] <==
	* W0919 17:25:36.970251       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0919 17:25:36.987566       1 node.go:135] Successfully retrieved node IP: 192.168.83.162
	I0919 17:25:36.987851       1 server_others.go:149] Using iptables Proxier.
	I0919 17:25:36.988261       1 server.go:529] Version: v1.16.0
	I0919 17:25:36.991884       1 config.go:313] Starting service config controller
	I0919 17:25:36.991940       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0919 17:25:36.991957       1 config.go:131] Starting endpoints config controller
	I0919 17:25:36.991967       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0919 17:25:37.092219       1 shared_informer.go:204] Caches are synced for service config 
	I0919 17:25:37.092218       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-proxy [e9a50232654e] <==
	* W0919 17:28:32.687771       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0919 17:28:32.700730       1 node.go:135] Successfully retrieved node IP: 192.168.83.162
	I0919 17:28:32.701459       1 server_others.go:149] Using iptables Proxier.
	I0919 17:28:32.703556       1 server.go:529] Version: v1.16.0
	I0919 17:28:32.711174       1 config.go:313] Starting service config controller
	I0919 17:28:32.720907       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0919 17:28:32.713153       1 config.go:131] Starting endpoints config controller
	I0919 17:28:32.723987       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0919 17:28:32.821375       1 shared_informer.go:204] Caches are synced for service config 
	I0919 17:28:32.824537       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [055183b8db4b] <==
	* E0919 17:25:15.206657       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 17:25:15.206709       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 17:25:15.206757       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 17:25:16.204941       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 17:25:16.207656       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0919 17:25:16.209111       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 17:25:16.211558       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 17:25:16.215472       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 17:25:16.216375       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 17:25:16.217559       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 17:25:16.220887       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0919 17:25:16.222415       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 17:25:16.223979       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 17:25:16.225724       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 17:27:02.960288       1 reflector.go:280] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to watch *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%3DFailed%2Cstatus.phase%3DSucceeded&resourceVersion=444&timeoutSeconds=382&watch=true: dial tcp 192.168.83.162:8443: connect: connection refused
	E0919 17:27:02.960423       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: Get https://control-plane.minikube.internal:8443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=1&timeout=6m49s&timeoutSeconds=409&watch=true: dial tcp 192.168.83.162:8443: connect: connection refused
	E0919 17:27:02.961476       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=471&timeout=6m47s&timeoutSeconds=407&watch=true: dial tcp 192.168.83.162:8443: connect: connection refused
	E0919 17:27:02.961532       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=454&timeout=9m31s&timeoutSeconds=571&watch=true: dial tcp 192.168.83.162:8443: connect: connection refused
	E0919 17:27:02.961565       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=1&timeout=5m29s&timeoutSeconds=329&watch=true: dial tcp 192.168.83.162:8443: connect: connection refused
	E0919 17:27:02.961638       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=402&timeout=9m28s&timeoutSeconds=568&watch=true: dial tcp 192.168.83.162:8443: connect: connection refused
	E0919 17:27:02.961687       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=341&timeout=8m12s&timeoutSeconds=492&watch=true: dial tcp 192.168.83.162:8443: connect: connection refused
	E0919 17:27:02.961760       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: Get https://control-plane.minikube.internal:8443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=1&timeout=5m50s&timeoutSeconds=350&watch=true: dial tcp 192.168.83.162:8443: connect: connection refused
	E0919 17:27:02.961984       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=1&timeout=7m1s&timeoutSeconds=421&watch=true: dial tcp 192.168.83.162:8443: connect: connection refused
	E0919 17:27:02.963200       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=1&timeout=9m59s&timeoutSeconds=599&watch=true: dial tcp 192.168.83.162:8443: connect: connection refused
	E0919 17:27:02.968558       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=1&timeout=5m19s&timeoutSeconds=319&watch=true: dial tcp 192.168.83.162:8443: connect: connection refused
	
	* 
	* ==> kube-scheduler [1b9652844e8e] <==
	* I0919 17:28:25.128168       1 serving.go:319] Generated self-signed cert in-memory
	I0919 17:28:29.856751       1 server.go:143] Version: v1.16.0
	I0919 17:28:29.857043       1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	W0919 17:28:29.860066       1 authorization.go:47] Authorization is disabled
	W0919 17:28:29.860103       1 authentication.go:79] Authentication is disabled
	I0919 17:28:29.860114       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0919 17:28:29.862147       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-09-19 17:27:46 UTC, ends at Tue 2023-09-19 17:29:10 UTC. --
	Sep 19 17:28:47 old-k8s-version-367105 kubelet[1496]: W0919 17:28:47.241605    1496 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/metrics-server-74d5856cc6-87v9h through plugin: invalid network status for
	Sep 19 17:28:47 old-k8s-version-367105 kubelet[1496]: E0919 17:28:47.279104    1496 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Sep 19 17:28:47 old-k8s-version-367105 kubelet[1496]: E0919 17:28:47.279198    1496 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Sep 19 17:28:47 old-k8s-version-367105 kubelet[1496]: E0919 17:28:47.279241    1496 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Sep 19 17:28:47 old-k8s-version-367105 kubelet[1496]: E0919 17:28:47.279286    1496 pod_workers.go:191] Error syncing pod 2ad723ef-9de6-427f-bbd4-57c97d0303ea ("metrics-server-74d5856cc6-87v9h_kube-system(2ad723ef-9de6-427f-bbd4-57c97d0303ea)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Sep 19 17:28:47 old-k8s-version-367105 kubelet[1496]: W0919 17:28:47.909144    1496 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/metrics-server-74d5856cc6-87v9h through plugin: invalid network status for
	Sep 19 17:28:47 old-k8s-version-367105 kubelet[1496]: E0919 17:28:47.931100    1496 pod_workers.go:191] Error syncing pod 2ad723ef-9de6-427f-bbd4-57c97d0303ea ("metrics-server-74d5856cc6-87v9h_kube-system(2ad723ef-9de6-427f-bbd4-57c97d0303ea)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 19 17:28:48 old-k8s-version-367105 kubelet[1496]: W0919 17:28:48.331887    1496 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-84b68f675b-2dlmj through plugin: invalid network status for
	Sep 19 17:28:48 old-k8s-version-367105 kubelet[1496]: W0919 17:28:48.614165    1496 pod_container_deletor.go:75] Container "50cea534683501b927e31e15e88b38fd7662c43879812016c4b83ba2f09dc6d5" not found in pod's containers
	Sep 19 17:28:48 old-k8s-version-367105 kubelet[1496]: W0919 17:28:48.622214    1496 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-h9lt5 through plugin: invalid network status for
	Sep 19 17:28:49 old-k8s-version-367105 kubelet[1496]: W0919 17:28:49.637955    1496 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-84b68f675b-2dlmj through plugin: invalid network status for
	Sep 19 17:28:49 old-k8s-version-367105 kubelet[1496]: W0919 17:28:49.644860    1496 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-h9lt5 through plugin: invalid network status for
	Sep 19 17:28:56 old-k8s-version-367105 kubelet[1496]: W0919 17:28:56.757499    1496 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-84b68f675b-2dlmj through plugin: invalid network status for
	Sep 19 17:29:02 old-k8s-version-367105 kubelet[1496]: E0919 17:29:02.564315    1496 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Sep 19 17:29:02 old-k8s-version-367105 kubelet[1496]: E0919 17:29:02.564345    1496 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Sep 19 17:29:02 old-k8s-version-367105 kubelet[1496]: E0919 17:29:02.564388    1496 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Sep 19 17:29:02 old-k8s-version-367105 kubelet[1496]: E0919 17:29:02.564423    1496 pod_workers.go:191] Error syncing pod 2ad723ef-9de6-427f-bbd4-57c97d0303ea ("metrics-server-74d5856cc6-87v9h_kube-system(2ad723ef-9de6-427f-bbd4-57c97d0303ea)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Sep 19 17:29:02 old-k8s-version-367105 kubelet[1496]: W0919 17:29:02.897451    1496 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-h9lt5 through plugin: invalid network status for
	Sep 19 17:29:03 old-k8s-version-367105 kubelet[1496]: E0919 17:29:03.060644    1496 pod_workers.go:191] Error syncing pod c4f9dbb5-a4da-498f-9ffd-a9aaf04f5d12 ("storage-provisioner_kube-system(c4f9dbb5-a4da-498f-9ffd-a9aaf04f5d12)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c4f9dbb5-a4da-498f-9ffd-a9aaf04f5d12)"
	Sep 19 17:29:04 old-k8s-version-367105 kubelet[1496]: W0919 17:29:04.075964    1496 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-h9lt5 through plugin: invalid network status for
	Sep 19 17:29:05 old-k8s-version-367105 kubelet[1496]: W0919 17:29:05.138749    1496 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-h9lt5 through plugin: invalid network status for
	Sep 19 17:29:05 old-k8s-version-367105 kubelet[1496]: E0919 17:29:05.150780    1496 pod_workers.go:191] Error syncing pod 3c423704-309c-49a7-a7c6-05da5bec0ef8 ("dashboard-metrics-scraper-d6b4b5544-h9lt5_kubernetes-dashboard(3c423704-309c-49a7-a7c6-05da5bec0ef8)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-h9lt5_kubernetes-dashboard(3c423704-309c-49a7-a7c6-05da5bec0ef8)"
	Sep 19 17:29:06 old-k8s-version-367105 kubelet[1496]: W0919 17:29:06.164251    1496 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-h9lt5 through plugin: invalid network status for
	Sep 19 17:29:06 old-k8s-version-367105 kubelet[1496]: E0919 17:29:06.169712    1496 pod_workers.go:191] Error syncing pod 3c423704-309c-49a7-a7c6-05da5bec0ef8 ("dashboard-metrics-scraper-d6b4b5544-h9lt5_kubernetes-dashboard(3c423704-309c-49a7-a7c6-05da5bec0ef8)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-h9lt5_kubernetes-dashboard(3c423704-309c-49a7-a7c6-05da5bec0ef8)"
	Sep 19 17:29:07 old-k8s-version-367105 kubelet[1496]: E0919 17:29:07.894114    1496 pod_workers.go:191] Error syncing pod 3c423704-309c-49a7-a7c6-05da5bec0ef8 ("dashboard-metrics-scraper-d6b4b5544-h9lt5_kubernetes-dashboard(3c423704-309c-49a7-a7c6-05da5bec0ef8)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-h9lt5_kubernetes-dashboard(3c423704-309c-49a7-a7c6-05da5bec0ef8)"
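	
	Every metrics-server pull above fails the same way: the image reference points at the unresolvable host fake.domain (a bogus registry the test appears to use deliberately), so each pull dies at DNS before any HTTP request is made. A hypothetical reproduction of exactly that failure (hostname taken from the log) is:
	
	    package main
	
	    import (
	    	"fmt"
	    	"net"
	    )
	
	    func main() {
	    	// fake.domain has no DNS record, so this fails the same way the
	    	// daemon's image pull does: "no such host".
	    	_, err := net.LookupHost("fake.domain")
	    	fmt.Println(err)
	    }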
	
	* 
	* ==> kubernetes-dashboard [bee6d085e6d4] <==
	* 2023/09/19 17:28:56 Using namespace: kubernetes-dashboard
	2023/09/19 17:28:56 Using in-cluster config to connect to apiserver
	2023/09/19 17:28:56 Using secret token for csrf signing
	2023/09/19 17:28:56 Initializing csrf token from kubernetes-dashboard-csrf secret
	2023/09/19 17:28:56 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2023/09/19 17:28:56 Successful initial request to the apiserver, version: v1.16.0
	2023/09/19 17:28:56 Generating JWE encryption key
	2023/09/19 17:28:56 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2023/09/19 17:28:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2023/09/19 17:28:57 Initializing JWE encryption key from synchronized object
	2023/09/19 17:28:57 Creating in-cluster Sidecar client
	2023/09/19 17:28:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/19 17:28:57 Serving insecurely on HTTP port: 9090
	2023/09/19 17:28:56 Starting overwatch
	
	* 
	* ==> storage-provisioner [8a06172dd6c6] <==
	* I0919 17:28:32.312672       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0919 17:29:02.322553       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-367105 -n old-k8s-version-367105
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-367105 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-87v9h
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-367105 describe pod metrics-server-74d5856cc6-87v9h
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-367105 describe pod metrics-server-74d5856cc6-87v9h: exit status 1 (72.582547ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-87v9h" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-367105 describe pod metrics-server-74d5856cc6-87v9h: exit status 1
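
The NotFound above is misleading rather than a second failure: the describe call carries no namespace flag, so kubectl looks in the context's default namespace, while metrics-server-74d5856cc6-87v9h lives in kube-system (see the node description earlier in this log). A hypothetical corrected post-mortem call (the -n flag is the fix; this sketch is illustrative, not the repo's helper) would be:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same describe as helpers_test.go, but scoped to the namespace the
    	// pod actually runs in, so an existing pod is not reported NotFound.
    	out, err := exec.Command("kubectl", "--context", "old-k8s-version-367105",
    		"-n", "kube-system", "describe", "pod",
    		"metrics-server-74d5856cc6-87v9h").CombinedOutput()
    	if err != nil {
    		fmt.Println("describe failed:", err)
    	}
    	fmt.Print(string(out))
    }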
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (2.88s)

                                                
                                    

Test pass (283/317)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 12
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.28.2/json-events 4.32
11 TestDownloadOnly/v1.28.2/preload-exists 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.05
16 TestDownloadOnly/DeleteAll 0.12
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.12
19 TestBinaryMirror 0.54
20 TestOffline 73.09
22 TestAddons/Setup 155.34
24 TestAddons/parallel/Registry 20.58
25 TestAddons/parallel/Ingress 25.15
26 TestAddons/parallel/InspektorGadget 11
27 TestAddons/parallel/MetricsServer 6.3
28 TestAddons/parallel/HelmTiller 14.19
30 TestAddons/parallel/CSI 57.64
31 TestAddons/parallel/Headlamp 17.4
32 TestAddons/parallel/CloudSpanner 6.06
35 TestAddons/serial/GCPAuth/Namespaces 0.14
36 TestAddons/StoppedEnableDisable 13.37
37 TestCertOptions 63.42
38 TestCertExpiration 307.96
39 TestDockerFlags 66.09
40 TestForceSystemdFlag 73.02
41 TestForceSystemdEnv 91.77
43 TestKVMDriverInstallOrUpdate 4.03
47 TestErrorSpam/setup 52.12
48 TestErrorSpam/start 0.32
49 TestErrorSpam/status 0.71
50 TestErrorSpam/pause 1.19
51 TestErrorSpam/unpause 1.24
52 TestErrorSpam/stop 3.51
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 105.98
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 40.61
59 TestFunctional/serial/KubeContext 0.04
60 TestFunctional/serial/KubectlGetPods 0.07
63 TestFunctional/serial/CacheCmd/cache/add_remote 2.31
64 TestFunctional/serial/CacheCmd/cache/add_local 1.27
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.2
69 TestFunctional/serial/CacheCmd/cache/delete 0.08
70 TestFunctional/serial/MinikubeKubectlCmd 0.11
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
72 TestFunctional/serial/ExtraConfig 42.86
73 TestFunctional/serial/ComponentHealth 0.07
74 TestFunctional/serial/LogsCmd 1.09
75 TestFunctional/serial/LogsFileCmd 1.1
76 TestFunctional/serial/InvalidService 5.19
78 TestFunctional/parallel/ConfigCmd 0.31
79 TestFunctional/parallel/DashboardCmd 24.4
80 TestFunctional/parallel/DryRun 0.3
81 TestFunctional/parallel/InternationalLanguage 0.14
82 TestFunctional/parallel/StatusCmd 1.03
86 TestFunctional/parallel/ServiceCmdConnect 8.49
87 TestFunctional/parallel/AddonsCmd 0.11
88 TestFunctional/parallel/PersistentVolumeClaim 55.03
90 TestFunctional/parallel/SSHCmd 0.43
91 TestFunctional/parallel/CpCmd 1.01
92 TestFunctional/parallel/MySQL 44.12
93 TestFunctional/parallel/FileSync 0.24
94 TestFunctional/parallel/CertSync 1.51
98 TestFunctional/parallel/NodeLabels 0.07
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.22
102 TestFunctional/parallel/License 0.18
103 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
104 TestFunctional/parallel/ServiceCmd/DeployApp 12.26
105 TestFunctional/parallel/MountCmd/any-port 10.6
106 TestFunctional/parallel/ProfileCmd/profile_list 0.34
107 TestFunctional/parallel/ProfileCmd/profile_json_output 0.28
117 TestFunctional/parallel/MountCmd/specific-port 1.74
118 TestFunctional/parallel/ServiceCmd/List 0.47
119 TestFunctional/parallel/MountCmd/VerifyCleanup 1.5
120 TestFunctional/parallel/ServiceCmd/JSONOutput 0.57
121 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
122 TestFunctional/parallel/ServiceCmd/Format 0.33
123 TestFunctional/parallel/ServiceCmd/URL 0.33
124 TestFunctional/parallel/Version/short 0.04
125 TestFunctional/parallel/Version/components 0.67
126 TestFunctional/parallel/DockerEnv/bash 0.89
127 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
128 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
129 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
130 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
131 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
132 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
133 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
134 TestFunctional/parallel/ImageCommands/ImageBuild 3.69
135 TestFunctional/parallel/ImageCommands/Setup 1.31
136 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.86
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.66
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.53
139 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.73
140 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
141 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.67
142 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.95
143 TestFunctional/delete_addon-resizer_images 0.06
144 TestFunctional/delete_my-image_image 0.01
145 TestFunctional/delete_minikube_cached_images 0.01
146 TestGvisorAddon 356.38
149 TestImageBuild/serial/Setup 51.62
150 TestImageBuild/serial/NormalBuild 1.65
151 TestImageBuild/serial/BuildWithBuildArg 1.31
152 TestImageBuild/serial/BuildWithDockerIgnore 0.39
153 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.27
156 TestIngressAddonLegacy/StartLegacyK8sCluster 109.86
158 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 14.4
159 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.5
160 TestIngressAddonLegacy/serial/ValidateIngressAddons 40.83
163 TestJSONOutput/start/Command 105.61
164 TestJSONOutput/start/Audit 0
166 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/pause/Command 0.56
170 TestJSONOutput/pause/Audit 0
172 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/unpause/Command 0.51
176 TestJSONOutput/unpause/Audit 0
178 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/stop/Command 13.09
182 TestJSONOutput/stop/Audit 0
184 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
186 TestErrorJSONOutput 0.18
191 TestMainNoArgs 0.04
192 TestMinikubeProfile 110.92
195 TestMountStart/serial/StartWithMountFirst 31.49
196 TestMountStart/serial/VerifyMountFirst 0.37
197 TestMountStart/serial/StartWithMountSecond 28.67
198 TestMountStart/serial/VerifyMountSecond 0.37
199 TestMountStart/serial/DeleteFirst 0.84
200 TestMountStart/serial/VerifyMountPostDelete 0.37
201 TestMountStart/serial/Stop 2.14
202 TestMountStart/serial/RestartStopped 24.3
203 TestMountStart/serial/VerifyMountPostStop 0.37
206 TestMultiNode/serial/FreshStart2Nodes 133.94
207 TestMultiNode/serial/DeployApp2Nodes 4.81
208 TestMultiNode/serial/PingHostFrom2Pods 0.84
209 TestMultiNode/serial/AddNode 44.53
210 TestMultiNode/serial/ProfileList 0.21
211 TestMultiNode/serial/CopyFile 7.2
212 TestMultiNode/serial/StopNode 3.96
214 TestMultiNode/serial/RestartKeepsNodes 258.69
215 TestMultiNode/serial/DeleteNode 1.76
216 TestMultiNode/serial/StopMultiNode 26.29
217 TestMultiNode/serial/RestartMultiNode 100.37
218 TestMultiNode/serial/ValidateNameConflict 53.35
223 TestPreload 195.01
226 TestSkaffold 139.55
229 TestRunningBinaryUpgrade 185.25
231 TestKubernetesUpgrade 238.6
235 TestPause/serial/Start 135.77
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
255 TestNoKubernetes/serial/StartWithK8s 117.52
256 TestNoKubernetes/serial/StartWithStopK8s 29.44
257 TestPause/serial/SecondStartNoReconfiguration 38.44
258 TestNoKubernetes/serial/Start 30.2
259 TestPause/serial/Pause 0.72
260 TestPause/serial/VerifyStatus 0.28
261 TestPause/serial/Unpause 0.75
262 TestPause/serial/PauseAgain 0.75
263 TestPause/serial/DeletePaused 0.99
264 TestPause/serial/VerifyDeletedResources 4.81
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
266 TestNoKubernetes/serial/ProfileList 6.36
267 TestNoKubernetes/serial/Stop 2.11
268 TestNoKubernetes/serial/StartNoArgs 92.75
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
270 TestStoppedBinaryUpgrade/Setup 0.27
271 TestStoppedBinaryUpgrade/Upgrade 222.18
272 TestNetworkPlugins/group/auto/Start 133.15
273 TestStoppedBinaryUpgrade/MinikubeLogs 1.6
274 TestNetworkPlugins/group/kindnet/Start 109.81
275 TestNetworkPlugins/group/calico/Start 127.67
276 TestNetworkPlugins/group/auto/KubeletFlags 0.22
277 TestNetworkPlugins/group/auto/NetCatPod 12.55
278 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
279 TestNetworkPlugins/group/custom-flannel/Start 90.29
280 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
281 TestNetworkPlugins/group/kindnet/NetCatPod 11.43
282 TestNetworkPlugins/group/auto/DNS 0.25
283 TestNetworkPlugins/group/auto/Localhost 0.18
284 TestNetworkPlugins/group/auto/HairPin 0.15
285 TestNetworkPlugins/group/kindnet/DNS 0.26
286 TestNetworkPlugins/group/kindnet/Localhost 0.23
287 TestNetworkPlugins/group/kindnet/HairPin 0.22
288 TestNetworkPlugins/group/false/Start 89.8
289 TestNetworkPlugins/group/enable-default-cni/Start 110.28
290 TestNetworkPlugins/group/calico/ControllerPod 5.03
291 TestNetworkPlugins/group/calico/KubeletFlags 0.21
292 TestNetworkPlugins/group/calico/NetCatPod 13.41
293 TestNetworkPlugins/group/calico/DNS 0.22
294 TestNetworkPlugins/group/calico/Localhost 0.22
295 TestNetworkPlugins/group/calico/HairPin 0.21
296 TestNetworkPlugins/group/flannel/Start 97.24
297 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
298 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.42
299 TestNetworkPlugins/group/custom-flannel/DNS 0.29
300 TestNetworkPlugins/group/custom-flannel/Localhost 0.25
301 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
302 TestNetworkPlugins/group/false/KubeletFlags 0.21
303 TestNetworkPlugins/group/false/NetCatPod 12.45
304 TestNetworkPlugins/group/bridge/Start 83.15
305 TestNetworkPlugins/group/false/DNS 0.21
306 TestNetworkPlugins/group/false/Localhost 0.17
307 TestNetworkPlugins/group/false/HairPin 0.19
308 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
309 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.54
310 TestNetworkPlugins/group/kubenet/Start 91.16
311 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
312 TestNetworkPlugins/group/enable-default-cni/Localhost 0.22
313 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
315 TestStartStop/group/old-k8s-version/serial/FirstStart 164.77
316 TestNetworkPlugins/group/flannel/ControllerPod 5.02
317 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
318 TestNetworkPlugins/group/flannel/NetCatPod 14.5
319 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
320 TestNetworkPlugins/group/flannel/DNS 0.26
321 TestNetworkPlugins/group/bridge/NetCatPod 12.62
322 TestNetworkPlugins/group/flannel/Localhost 0.28
323 TestNetworkPlugins/group/flannel/HairPin 0.24
324 TestNetworkPlugins/group/bridge/DNS 0.23
325 TestNetworkPlugins/group/bridge/Localhost 0.19
326 TestNetworkPlugins/group/bridge/HairPin 0.22
328 TestStartStop/group/no-preload/serial/FirstStart 90.5
330 TestStartStop/group/embed-certs/serial/FirstStart 95.36
331 TestNetworkPlugins/group/kubenet/KubeletFlags 0.26
332 TestNetworkPlugins/group/kubenet/NetCatPod 11.44
333 TestNetworkPlugins/group/kubenet/DNS 0.21
334 TestNetworkPlugins/group/kubenet/Localhost 0.16
335 TestNetworkPlugins/group/kubenet/HairPin 0.15
337 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 93.1
338 TestStartStop/group/no-preload/serial/DeployApp 11.57
339 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.35
340 TestStartStop/group/no-preload/serial/Stop 13.12
341 TestStartStop/group/embed-certs/serial/DeployApp 9.56
342 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
343 TestStartStop/group/old-k8s-version/serial/DeployApp 9.52
344 TestStartStop/group/no-preload/serial/SecondStart 329.52
345 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.25
346 TestStartStop/group/embed-certs/serial/Stop 13.12
347 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.99
348 TestStartStop/group/old-k8s-version/serial/Stop 13.24
349 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
350 TestStartStop/group/embed-certs/serial/SecondStart 312.01
351 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.52
352 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
353 TestStartStop/group/old-k8s-version/serial/SecondStart 83.63
354 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.26
355 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.12
356 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
357 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 335.67
358 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 24.02
359 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.14
361 TestStartStop/group/old-k8s-version/serial/Pause 2.55
363 TestStartStop/group/newest-cni/serial/FirstStart 75.53
364 TestStartStop/group/newest-cni/serial/DeployApp 0
365 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.08
366 TestStartStop/group/newest-cni/serial/Stop 8.11
367 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
368 TestStartStop/group/newest-cni/serial/SecondStart 48.01
369 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
370 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
371 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
372 TestStartStop/group/newest-cni/serial/Pause 2.36
373 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.03
374 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 22.03
375 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
376 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
377 TestStartStop/group/embed-certs/serial/Pause 2.68
378 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
379 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
380 TestStartStop/group/no-preload/serial/Pause 2.38
381 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.03
382 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
383 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
384 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.36
TestDownloadOnly/v1.16.0/json-events (12s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-976297 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-976297 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 : (11.998510736s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (12.00s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
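This subtest only asserts that the tarball fetched by the json-events step is present on disk. A minimal hand check, using the cache path recorded earlier in this log:

	PRELOAD="/home/jenkins/minikube-integration/17240-65689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4"
	[ -f "$PRELOAD" ] && echo "preload exists" || echo "preload missing"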

TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-976297
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-976297: exit status 85 (55.457902ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-976297 | jenkins | v1.31.2 | 19 Sep 23 16:34 UTC |          |
	|         | -p download-only-976297        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/19 16:34:27
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 16:34:27.730529   73409 out.go:296] Setting OutFile to fd 1 ...
	I0919 16:34:27.730631   73409 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 16:34:27.730642   73409 out.go:309] Setting ErrFile to fd 2...
	I0919 16:34:27.730648   73409 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 16:34:27.730857   73409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-65689/.minikube/bin
	W0919 16:34:27.731015   73409 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17240-65689/.minikube/config/config.json: open /home/jenkins/minikube-integration/17240-65689/.minikube/config/config.json: no such file or directory
	I0919 16:34:27.731677   73409 out.go:303] Setting JSON to true
	I0919 16:34:27.732586   73409 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":4381,"bootTime":1695136887,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 16:34:27.732638   73409 start.go:138] virtualization: kvm guest
	I0919 16:34:27.735121   73409 out.go:97] [download-only-976297] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0919 16:34:27.736689   73409 out.go:169] MINIKUBE_LOCATION=17240
	W0919 16:34:27.735235   73409 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17240-65689/.minikube/cache/preloaded-tarball: no such file or directory
	I0919 16:34:27.735308   73409 notify.go:220] Checking for updates...
	I0919 16:34:27.739307   73409 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 16:34:27.740768   73409 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17240-65689/kubeconfig
	I0919 16:34:27.742116   73409 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-65689/.minikube
	I0919 16:34:27.743326   73409 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0919 16:34:27.745685   73409 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0919 16:34:27.745965   73409 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 16:34:27.780928   73409 out.go:97] Using the kvm2 driver based on user configuration
	I0919 16:34:27.780951   73409 start.go:298] selected driver: kvm2
	I0919 16:34:27.780956   73409 start.go:902] validating driver "kvm2" against <nil>
	I0919 16:34:27.781305   73409 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 16:34:27.781390   73409 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17240-65689/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 16:34:27.795955   73409 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0919 16:34:27.796000   73409 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0919 16:34:27.796499   73409 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0919 16:34:27.796630   73409 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 16:34:27.796662   73409 cni.go:84] Creating CNI manager for ""
	I0919 16:34:27.796677   73409 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0919 16:34:27.796688   73409 start_flags.go:321] config:
	{Name:download-only-976297 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-976297 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 16:34:27.796889   73409 iso.go:125] acquiring lock: {Name:mkdf0d42546c83faf1a624ccdb8d9876db7a1a92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 16:34:27.798598   73409 out.go:97] Downloading VM boot image ...
	I0919 16:34:27.798627   73409 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17240-65689/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I0919 16:34:31.406000   73409 out.go:97] Starting control plane node download-only-976297 in cluster download-only-976297
	I0919 16:34:31.406026   73409 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0919 16:34:31.432904   73409 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0919 16:34:31.432936   73409 cache.go:57] Caching tarball of preloaded images
	I0919 16:34:31.433120   73409 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0919 16:34:31.434989   73409 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0919 16:34:31.435004   73409 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0919 16:34:31.463450   73409 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/17240-65689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0919 16:34:33.734306   73409 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0919 16:34:33.734395   73409 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17240-65689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0919 16:34:34.471675   73409 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0919 16:34:34.472012   73409 profile.go:148] Saving config to /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/download-only-976297/config.json ...
	I0919 16:34:34.472042   73409 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/download-only-976297/config.json: {Name:mke97728537f83944b3770b20354645c9418e56f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 16:34:34.472320   73409 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0919 16:34:34.472551   73409 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17240-65689/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-976297"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)
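Note the test passes even though `minikube logs` exits 85: a --download-only profile has no control-plane node (as the stdout above says), so the non-zero exit is the recorded, expected outcome. Observing it by hand with the same profile:

	out/minikube-linux-amd64 logs -p download-only-976297
	echo "exit status: $?"   # 85 for a download-only profile with no control plane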

TestDownloadOnly/v1.28.2/json-events (4.32s)

=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-976297 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-976297 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=kvm2 : (4.323461115s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (4.32s)

TestDownloadOnly/v1.28.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

TestDownloadOnly/v1.28.2/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-976297
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-976297: exit status 85 (54.164988ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-976297 | jenkins | v1.31.2 | 19 Sep 23 16:34 UTC |          |
	|         | -p download-only-976297        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-976297 | jenkins | v1.31.2 | 19 Sep 23 16:34 UTC |          |
	|         | -p download-only-976297        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/19 16:34:39
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 16:34:39.787174   73476 out.go:296] Setting OutFile to fd 1 ...
	I0919 16:34:39.787280   73476 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 16:34:39.787284   73476 out.go:309] Setting ErrFile to fd 2...
	I0919 16:34:39.787288   73476 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 16:34:39.787435   73476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-65689/.minikube/bin
	W0919 16:34:39.787540   73476 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17240-65689/.minikube/config/config.json: open /home/jenkins/minikube-integration/17240-65689/.minikube/config/config.json: no such file or directory
	I0919 16:34:39.787931   73476 out.go:303] Setting JSON to true
	I0919 16:34:39.788716   73476 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":4393,"bootTime":1695136887,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 16:34:39.788768   73476 start.go:138] virtualization: kvm guest
	I0919 16:34:39.790990   73476 out.go:97] [download-only-976297] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0919 16:34:39.792673   73476 out.go:169] MINIKUBE_LOCATION=17240
	I0919 16:34:39.791131   73476 notify.go:220] Checking for updates...
	I0919 16:34:39.795461   73476 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 16:34:39.797010   73476 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17240-65689/kubeconfig
	I0919 16:34:39.798365   73476 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-65689/.minikube
	I0919 16:34:39.799894   73476 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-976297"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.05s)

TestDownloadOnly/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.12s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-976297
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.54s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-458010 --alsologtostderr --binary-mirror http://127.0.0.1:36289 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-458010" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-458010
--- PASS: TestBinaryMirror (0.54s)
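--binary-mirror redirects minikube's kubectl/kubelet/kubeadm downloads to the given URL; the 127.0.0.1:36289 endpoint above is a short-lived server run by the test harness. A rough stand-alone sketch (hypothetical mirror directory and profile name, not from this log):

	python3 -m http.server 36289 --directory /tmp/k8s-binary-mirror &   # hypothetical local mirror
	out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
		--binary-mirror http://127.0.0.1:36289 --driver=kvm2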

TestOffline (73.09s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-265816 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-265816 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m12.04704666s)
helpers_test.go:175: Cleaning up "offline-docker-265816" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-265816
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-265816: (1.039267609s)
--- PASS: TestOffline (73.09s)

TestAddons/Setup (155.34s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-528212 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-528212 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m35.340367351s)
--- PASS: TestAddons/Setup (155.34s)

TestAddons/parallel/Registry (20.58s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 22.613945ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-rmdss" [dbc0ca03-2a49-4599-bf29-7c7f09262ea6] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.023146931s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-mjrj5" [93739d66-8a4a-4f9f-bf46-64f736c21232] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.014200754s
addons_test.go:316: (dbg) Run:  kubectl --context addons-528212 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-528212 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-528212 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (9.53958701s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-528212 ip
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-528212 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (20.58s)
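The reachability half of this test can be reproduced by hand in two ways: in-cluster exactly as above, and from the host via the node IP (port 5000 is the registry endpoint probed later in this log):

	# In-cluster, via a throwaway busybox pod and the addon's Service DNS name:
	kubectl --context addons-528212 run --rm registry-probe --restart=Never \
		--image=gcr.io/k8s-minikube/busybox -it -- \
		wget --spider -S http://registry.kube-system.svc.cluster.local
	# From the host, through the address reported by 'minikube ip':
	curl -sI "http://$(out/minikube-linux-amd64 -p addons-528212 ip):5000/"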

TestAddons/parallel/Ingress (25.15s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-528212 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-528212 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-528212 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b61a65ac-cc34-4661-a09e-8f45c155b65c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b61a65ac-cc34-4661-a09e-8f45c155b65c] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.017585657s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-528212 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context addons-528212 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-528212 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.50.42
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-528212 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p addons-528212 addons disable ingress-dns --alsologtostderr -v=1: (2.46795688s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-528212 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-528212 addons disable ingress --alsologtostderr -v=1: (7.714382211s)
--- PASS: TestAddons/parallel/Ingress (25.15s)
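Both ingress paths exercised above can be re-run manually against this profile: HTTP through the controller with a Host header, and name resolution through ingress-dns with the node IP as resolver:

	out/minikube-linux-amd64 -p addons-528212 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	nslookup hello-john.test "$(out/minikube-linux-amd64 -p addons-528212 ip)"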

TestAddons/parallel/InspektorGadget (11s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-tr78d" [b32d04ff-71e2-4e70-8141-f82c95736fa9] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.01191481s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-528212
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-528212: (5.988576139s)
--- PASS: TestAddons/parallel/InspektorGadget (11.00s)

TestAddons/parallel/MetricsServer (6.3s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 23.4626ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-bl88q" [f15f6e45-9dd9-48f3-a7f3-8ce5f0858491] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.020607853s
addons_test.go:391: (dbg) Run:  kubectl --context addons-528212 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-528212 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p addons-528212 addons disable metrics-server --alsologtostderr -v=1: (1.174052452s)
--- PASS: TestAddons/parallel/MetricsServer (6.30s)
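`kubectl top pods` only works once metrics-server is serving the aggregated metrics API; a more direct probe of the same condition (standard metrics.k8s.io endpoint, not part of the original output) is:

	kubectl --context addons-528212 get --raw /apis/metrics.k8s.io/v1beta1/pods | head -c 200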

TestAddons/parallel/HelmTiller (14.19s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 22.777114ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-q2pns" [8b3fb88e-001b-43e1-bc5c-10271eeef555] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.01842379s
addons_test.go:449: (dbg) Run:  kubectl --context addons-528212 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-528212 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.517409173s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-528212 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (14.19s)

TestAddons/parallel/CSI (57.64s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 7.214421ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-528212 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528212 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528212 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528212 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528212 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528212 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528212 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528212 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528212 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528212 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528212 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528212 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528212 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528212 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528212 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528212 get pvc hpvc -o jsonpath={.status.phase} -n default
2023/09/19 16:37:40 [DEBUG] GET http://192.168.50.42:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528212 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-528212 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [6e7c4f65-063c-4822-b6fb-1af07966999d] Pending
helpers_test.go:344: "task-pv-pod" [6e7c4f65-063c-4822-b6fb-1af07966999d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [6e7c4f65-063c-4822-b6fb-1af07966999d] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 18.01572576s
addons_test.go:560: (dbg) Run:  kubectl --context addons-528212 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-528212 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-528212 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-528212 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-528212 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-528212 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-528212 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528212 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528212 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528212 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528212 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528212 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528212 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-528212 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-528212 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d3341a1a-bb16-49c6-8cff-84ef9f143ae5] Pending
helpers_test.go:344: "task-pv-pod-restore" [d3341a1a-bb16-49c6-8cff-84ef9f143ae5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d3341a1a-bb16-49c6-8cff-84ef9f143ae5] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.015157714s
addons_test.go:602: (dbg) Run:  kubectl --context addons-528212 delete pod task-pv-pod-restore
addons_test.go:602: (dbg) Done: kubectl --context addons-528212 delete pod task-pv-pod-restore: (1.298058576s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-528212 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-528212 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-528212 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-528212 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.691194436s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-528212 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (57.64s)
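The long run of identical `get pvc ... jsonpath={.status.phase}` lines above is a poll loop driven from Go (helpers_test.go:394). Condensed into shell, the equivalent wait looks like this (no 6m0s timeout, unlike the test):

	until [ "$(kubectl --context addons-528212 get pvc hpvc -o jsonpath='{.status.phase}')" = "Bound" ]; do
		sleep 2   # phase moves Pending -> Bound once the CSI driver provisions the volume
	done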

TestAddons/parallel/Headlamp (17.4s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-528212 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-528212 --alsologtostderr -v=1: (1.357757817s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-699c48fb74-59d2x" [c6d6099b-2c4f-43df-b9ee-f012658b146e] Pending
helpers_test.go:344: "headlamp-699c48fb74-59d2x" [c6d6099b-2c4f-43df-b9ee-f012658b146e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-699c48fb74-59d2x" [c6d6099b-2c4f-43df-b9ee-f012658b146e] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-699c48fb74-59d2x" [c6d6099b-2c4f-43df-b9ee-f012658b146e] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.043174071s
--- PASS: TestAddons/parallel/Headlamp (17.40s)

TestAddons/parallel/CloudSpanner (6.06s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-bxlb5" [2448df3b-9959-46c1-a3a3-1f104857fd2d] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.018623613s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-528212
addons_test.go:836: (dbg) Done: out/minikube-linux-amd64 addons disable cloud-spanner -p addons-528212: (1.017346106s)
--- PASS: TestAddons/parallel/CloudSpanner (6.06s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-528212 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-528212 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/StoppedEnableDisable (13.37s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-528212
addons_test.go:148: (dbg) Done: out/minikube-linux-amd64 stop -p addons-528212: (13.123644851s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-528212
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-528212
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-528212
--- PASS: TestAddons/StoppedEnableDisable (13.37s)

TestCertOptions (63.42s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-868261 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-868261 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m2.005457317s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-868261 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-868261 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-868261 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-868261" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-868261
--- PASS: TestCertOptions (63.42s)
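The assertions behind this test reduce to reading the SANs back out of the generated API server certificate. The openssl call above dumps the cert; grepping its output for the requested names is a quick manual spot check:

	out/minikube-linux-amd64 -p cert-options-868261 ssh \
		"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
		| grep -E '192\.168\.15\.15|www\.google\.com'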

TestCertExpiration (307.96s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-524971 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-524971 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m5.667980774s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-524971 --memory=2048 --cert-expiration=8760h --driver=kvm2 
E0919 17:20:41.231933   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/skaffold-182725/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-524971 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (1m1.153893898s)
helpers_test.go:175: Cleaning up "cert-expiration-524971" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-524971
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-524971: (1.139758794s)
--- PASS: TestCertExpiration (307.96s)
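The point of the second start is that --cert-expiration on an existing profile regenerates the certificates with the new lifetime (here 3m stretched to 8760h). A sketch for confirming the new expiry by hand, using the same in-VM path as the cert test above:

    out/minikube-linux-amd64 -p cert-expiration-524971 ssh \
      "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"
    # notAfter should now be roughly a year out rather than minutes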

TestDockerFlags (66.09s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-554738 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
E0919 17:17:03.158880   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/skaffold-182725/client.crt: no such file or directory
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-554738 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m4.542099021s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-554738 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-554738 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-554738" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-554738
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-554738: (1.069988836s)
--- PASS: TestDockerFlags (66.09s)
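The two systemctl probes are the assertions: --docker-env values must appear in the docker unit's Environment, and --docker-opt values must end up on dockerd's command line. By hand (the expected substrings are an assumption about how minikube renders --docker-opt):

    out/minikube-linux-amd64 -p docker-flags-554738 ssh \
      "sudo systemctl show docker --property=Environment --no-pager"   # expect FOO=BAR and BAZ=BAT
    out/minikube-linux-amd64 -p docker-flags-554738 ssh \
      "sudo systemctl show docker --property=ExecStart --no-pager"     # expect the debug and icc=true options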

TestForceSystemdFlag (73.02s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-966284 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-966284 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m11.690212111s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-966284 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-966284" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-966284
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-966284: (1.096809642s)
--- PASS: TestForceSystemdFlag (73.02s)
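--force-systemd flips Docker's cgroup driver from its default (cgroupfs) to systemd, and the single docker info probe is the whole check:

    out/minikube-linux-amd64 -p force-systemd-flag-966284 ssh \
      "docker info --format {{.CgroupDriver}}"   # expect: systemd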

TestForceSystemdEnv (91.77s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-603188 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
E0919 17:15:23.423317   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-603188 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m30.218295704s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-603188 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-603188" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-603188
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-603188: (1.160592166s)
--- PASS: TestForceSystemdEnv (91.77s)

TestKVMDriverInstallOrUpdate (4.03s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.03s)

TestErrorSpam/setup (52.12s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-832384 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-832384 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-832384 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-832384 --driver=kvm2 : (52.122541269s)
--- PASS: TestErrorSpam/setup (52.12s)

TestErrorSpam/start (0.32s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832384 --log_dir /tmp/nospam-832384 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832384 --log_dir /tmp/nospam-832384 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832384 --log_dir /tmp/nospam-832384 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

TestErrorSpam/status (0.71s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832384 --log_dir /tmp/nospam-832384 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832384 --log_dir /tmp/nospam-832384 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832384 --log_dir /tmp/nospam-832384 status
--- PASS: TestErrorSpam/status (0.71s)

TestErrorSpam/pause (1.19s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832384 --log_dir /tmp/nospam-832384 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832384 --log_dir /tmp/nospam-832384 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832384 --log_dir /tmp/nospam-832384 pause
--- PASS: TestErrorSpam/pause (1.19s)

TestErrorSpam/unpause (1.24s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832384 --log_dir /tmp/nospam-832384 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832384 --log_dir /tmp/nospam-832384 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832384 --log_dir /tmp/nospam-832384 unpause
--- PASS: TestErrorSpam/unpause (1.24s)

TestErrorSpam/stop (3.51s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832384 --log_dir /tmp/nospam-832384 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-832384 --log_dir /tmp/nospam-832384 stop: (3.386784637s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832384 --log_dir /tmp/nospam-832384 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-832384 --log_dir /tmp/nospam-832384 stop
--- PASS: TestErrorSpam/stop (3.51s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17240-65689/.minikube/files/etc/test/nested/copy/73397/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (105.98s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-973448 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-973448 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m45.975767893s)
--- PASS: TestFunctional/serial/StartWithProxy (105.98s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.61s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-973448 --alsologtostderr -v=8
E0919 16:42:20.371841   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
E0919 16:42:20.377886   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
E0919 16:42:20.388133   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
E0919 16:42:20.408985   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
E0919 16:42:20.449421   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
E0919 16:42:20.529631   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
E0919 16:42:20.690432   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
E0919 16:42:21.011736   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-973448 --alsologtostderr -v=8: (40.605839326s)
functional_test.go:659: soft start took 40.606510356s for "functional-973448" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.61s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-973448 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.31s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 cache add registry.k8s.io/pause:3.1
E0919 16:42:21.651881   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 cache add registry.k8s.io/pause:3.3
E0919 16:42:22.932192   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.31s)

TestFunctional/serial/CacheCmd/cache/add_local (1.27s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-973448 /tmp/TestFunctionalserialCacheCmdcacheadd_local163636471/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 cache add minikube-local-cache-test:functional-973448
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 cache delete minikube-local-cache-test:functional-973448
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-973448
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.2s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh sudo docker rmi registry.k8s.io/pause:latest
E0919 16:42:25.493290   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-973448 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (260.620872ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.20s)
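The reload sequence reads: delete the image inside the node, prove it is gone (the expected exit status 1 above), then restore it from the host-side cache and prove it is back. Condensed:

    out/minikube-linux-amd64 -p functional-973448 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-973448 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1
    out/minikube-linux-amd64 -p functional-973448 cache reload
    out/minikube-linux-amd64 -p functional-973448 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 0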

TestFunctional/serial/CacheCmd/cache/delete (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 kubectl -- --context functional-973448 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-973448 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (42.86s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-973448 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0919 16:42:30.614022   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
E0919 16:42:40.854360   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
E0919 16:43:01.334677   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-973448 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.85716044s)
functional_test.go:757: restart took 42.857265517s for "functional-973448" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.86s)
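--extra-config takes component.key=value pairs that minikube forwards to the named component's flags; here it lands as the apiserver ExtraOptions entry visible in the profile dumps further down. By hand:

    out/minikube-linux-amd64 start -p functional-973448 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all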

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-973448 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.09s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-973448 logs: (1.08489586s)
--- PASS: TestFunctional/serial/LogsCmd (1.09s)

TestFunctional/serial/LogsFileCmd (1.1s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 logs --file /tmp/TestFunctionalserialLogsFileCmd1430630807/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-973448 logs --file /tmp/TestFunctionalserialLogsFileCmd1430630807/001/logs.txt: (1.100312809s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.10s)

TestFunctional/serial/InvalidService (5.19s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-973448 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-973448
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-973448: exit status 115 (286.185011ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.61.43:31617 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-973448 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-973448 delete -f testdata/invalidsvc.yaml: (1.621341888s)
--- PASS: TestFunctional/serial/InvalidService (5.19s)
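Exit status 115 is minikube's SVC_UNREACHABLE error: the Service and NodePort exist, but no running pod backs them, so the URL step refuses. A sketch of the same negative check:

    kubectl --context functional-973448 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-973448 || echo "exit $?"   # exit 115
    kubectl --context functional-973448 delete -f testdata/invalidsvc.yaml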

TestFunctional/parallel/ConfigCmd (0.31s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-973448 config get cpus: exit status 14 (58.17203ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-973448 config get cpus: exit status 14 (42.897727ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.31s)
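Both Non-zero exits above are deliberate: config get on an unset key exits 14 with "specified key could not be found in config". The full round-trip being asserted:

    out/minikube-linux-amd64 -p functional-973448 config get cpus; echo "exit $?"   # exit 14 while unset
    out/minikube-linux-amd64 -p functional-973448 config set cpus 2
    out/minikube-linux-amd64 -p functional-973448 config get cpus                   # prints 2
    out/minikube-linux-amd64 -p functional-973448 config unset cpus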

TestFunctional/parallel/DashboardCmd (24.4s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-973448 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-973448 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 78778: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (24.40s)

TestFunctional/parallel/DryRun (0.3s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-973448 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-973448 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (156.291416ms)

-- stdout --
	* [functional-973448] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17240
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17240-65689/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-65689/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0919 16:43:18.782661   78657 out.go:296] Setting OutFile to fd 1 ...
	I0919 16:43:18.783026   78657 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 16:43:18.783041   78657 out.go:309] Setting ErrFile to fd 2...
	I0919 16:43:18.783048   78657 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 16:43:18.783347   78657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-65689/.minikube/bin
	I0919 16:43:18.784055   78657 out.go:303] Setting JSON to false
	I0919 16:43:18.785384   78657 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":4912,"bootTime":1695136887,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 16:43:18.785474   78657 start.go:138] virtualization: kvm guest
	I0919 16:43:18.788145   78657 out.go:177] * [functional-973448] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0919 16:43:18.790144   78657 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 16:43:18.790286   78657 notify.go:220] Checking for updates...
	I0919 16:43:18.791927   78657 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 16:43:18.793573   78657 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17240-65689/kubeconfig
	I0919 16:43:18.795172   78657 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-65689/.minikube
	I0919 16:43:18.797670   78657 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 16:43:18.799493   78657 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 16:43:18.801461   78657 config.go:182] Loaded profile config "functional-973448": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 16:43:18.802045   78657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 16:43:18.802102   78657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:43:18.818522   78657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45523
	I0919 16:43:18.819127   78657 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:43:18.819612   78657 main.go:141] libmachine: Using API Version  1
	I0919 16:43:18.819632   78657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:43:18.820160   78657 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:43:18.820288   78657 main.go:141] libmachine: (functional-973448) Calling .DriverName
	I0919 16:43:18.820507   78657 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 16:43:18.820921   78657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 16:43:18.820973   78657 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:43:18.837945   78657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46313
	I0919 16:43:18.838447   78657 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:43:18.839087   78657 main.go:141] libmachine: Using API Version  1
	I0919 16:43:18.839121   78657 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:43:18.839544   78657 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:43:18.839772   78657 main.go:141] libmachine: (functional-973448) Calling .DriverName
	I0919 16:43:18.879613   78657 out.go:177] * Using the kvm2 driver based on existing profile
	I0919 16:43:18.881313   78657 start.go:298] selected driver: kvm2
	I0919 16:43:18.881329   78657 start.go:902] validating driver "kvm2" against &{Name:functional-973448 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-973448 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.61.43 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 16:43:18.881480   78657 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 16:43:18.884402   78657 out.go:177] 
	W0919 16:43:18.885940   78657 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0919 16:43:18.887601   78657 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-973448 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.30s)
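The 250MB request trips the RSRC_INSUFFICIENT_REQ_MEMORY validation (usable minimum 1800MB) and exits 23 before any VM work happens; the second --dry-run with no memory override passes against the existing profile. To reproduce just the failing validation:

    out/minikube-linux-amd64 start -p functional-973448 --dry-run --memory 250MB \
      --alsologtostderr --driver=kvm2; echo "exit $?"   # exit 23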

TestFunctional/parallel/InternationalLanguage (0.14s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-973448 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-973448 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (139.324835ms)

-- stdout --
	* [functional-973448] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17240
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17240-65689/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-65689/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0919 16:43:18.557745   78584 out.go:296] Setting OutFile to fd 1 ...
	I0919 16:43:18.557916   78584 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 16:43:18.557928   78584 out.go:309] Setting ErrFile to fd 2...
	I0919 16:43:18.557935   78584 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 16:43:18.558332   78584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-65689/.minikube/bin
	I0919 16:43:18.558959   78584 out.go:303] Setting JSON to false
	I0919 16:43:18.560163   78584 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":4912,"bootTime":1695136887,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 16:43:18.560247   78584 start.go:138] virtualization: kvm guest
	I0919 16:43:18.562611   78584 out.go:177] * [functional-973448] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I0919 16:43:18.564525   78584 out.go:177]   - MINIKUBE_LOCATION=17240
	I0919 16:43:18.564592   78584 notify.go:220] Checking for updates...
	I0919 16:43:18.566062   78584 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 16:43:18.567497   78584 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17240-65689/kubeconfig
	I0919 16:43:18.568861   78584 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-65689/.minikube
	I0919 16:43:18.570234   78584 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 16:43:18.571707   78584 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 16:43:18.574699   78584 config.go:182] Loaded profile config "functional-973448": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 16:43:18.575324   78584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 16:43:18.575379   78584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:43:18.590633   78584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45459
	I0919 16:43:18.591001   78584 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:43:18.591517   78584 main.go:141] libmachine: Using API Version  1
	I0919 16:43:18.591545   78584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:43:18.591904   78584 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:43:18.592073   78584 main.go:141] libmachine: (functional-973448) Calling .DriverName
	I0919 16:43:18.592290   78584 driver.go:373] Setting default libvirt URI to qemu:///system
	I0919 16:43:18.592564   78584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 16:43:18.592599   78584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:43:18.609102   78584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43925
	I0919 16:43:18.609435   78584 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:43:18.609873   78584 main.go:141] libmachine: Using API Version  1
	I0919 16:43:18.609897   78584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:43:18.610158   78584 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:43:18.610358   78584 main.go:141] libmachine: (functional-973448) Calling .DriverName
	I0919 16:43:18.643159   78584 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0919 16:43:18.644673   78584 start.go:298] selected driver: kvm2
	I0919 16:43:18.644693   78584 start.go:902] validating driver "kvm2" against &{Name:functional-973448 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-973448 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.61.43 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0919 16:43:18.644856   78584 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 16:43:18.647450   78584 out.go:177] 
	W0919 16:43:18.648934   78584 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0919 16:43:18.650459   78584 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (1.03s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)
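status accepts a Go template via -f, so single fields can be scripted (the "kublet:" label above is just the test's chosen output text; the underlying field is .Kubelet). Sketches:

    out/minikube-linux-amd64 -p functional-973448 status -f "{{.Host}}"   # Running
    out/minikube-linux-amd64 -p functional-973448 status -o json          # machine-readable form of the same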

TestFunctional/parallel/ServiceCmdConnect (8.49s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-973448 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-973448 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-pm5xp" [cbec30d8-efda-447e-9d18-d0860b46f1d6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-pm5xp" [cbec30d8-efda-447e-9d18-d0860b46f1d6] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.013227475s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.61.43:31007
functional_test.go:1674: http://192.168.61.43:31007: success! body:

Hostname: hello-node-connect-55497b8b78-pm5xp

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.61.43:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.61.43:31007
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.49s)
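The flow: create a deployment, expose it as a NodePort Service, resolve the node URL through minikube, and fetch it (the test uses Go's HTTP client, per the user-agent echoed above; curl here is an assumption). Condensed:

    kubectl --context functional-973448 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-973448 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p functional-973448 service hello-node-connect --url)
    curl -s "$URL"   # echoserver reflects the request back once the pod is Running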

TestFunctional/parallel/AddonsCmd (0.11s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (55.03s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6a4a8921-979c-4818-815b-bd26a44fb9e1] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.017468413s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-973448 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-973448 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-973448 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-973448 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-973448 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1d6f7cc3-6c54-414f-9f19-96468a1734ba] Pending
helpers_test.go:344: "sp-pod" [1d6f7cc3-6c54-414f-9f19-96468a1734ba] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1d6f7cc3-6c54-414f-9f19-96468a1734ba] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.227040516s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-973448 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-973448 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-973448 delete -f testdata/storage-provisioner/pod.yaml: (1.708987769s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-973448 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [06c52e9b-230e-404b-a3fe-d80ebe9e1f42] Pending
helpers_test.go:344: "sp-pod" [06c52e9b-230e-404b-a3fe-d80ebe9e1f42] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [06c52e9b-230e-404b-a3fe-d80ebe9e1f42] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.106002964s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-973448 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (55.03s)
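The second sp-pod is the persistence proof: a file touched under the PVC-backed mount must survive deleting and recreating the consuming pod. The essence:

    kubectl --context functional-973448 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-973448 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-973448 apply -f testdata/storage-provisioner/pod.yaml
    # once the new sp-pod is Running:
    kubectl --context functional-973448 exec sp-pod -- ls /tmp/mount   # foo persisted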

TestFunctional/parallel/SSHCmd (0.43s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

TestFunctional/parallel/CpCmd (1.01s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh -n functional-973448 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 cp functional-973448:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd308293756/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh -n functional-973448 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.01s)

TestFunctional/parallel/MySQL (44.12s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-973448 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-k8wl7" [d79c7dbf-dab4-4978-89c1-0465a1672438] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-k8wl7" [d79c7dbf-dab4-4978-89c1-0465a1672438] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 38.020216028s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-973448 exec mysql-859648c796-k8wl7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-973448 exec mysql-859648c796-k8wl7 -- mysql -ppassword -e "show databases;": exit status 1 (184.195652ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-973448 exec mysql-859648c796-k8wl7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-973448 exec mysql-859648c796-k8wl7 -- mysql -ppassword -e "show databases;": exit status 1 (198.122507ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-973448 exec mysql-859648c796-k8wl7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-973448 exec mysql-859648c796-k8wl7 -- mysql -ppassword -e "show databases;": exit status 1 (203.268941ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-973448 exec mysql-859648c796-k8wl7 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (44.12s)
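
The retries above are expected behavior, not flakiness: the pod reports Running as soon as the mysqld process starts, but the server first comes up in an initialization phase that rejects the final root password (ERROR 1045) and is then briefly unreachable while it restarts (ERROR 2002). The test simply reruns the query until it succeeds. A minimal sketch of that retry loop, with the context and pod name from the log and a hypothetical two-minute budget:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // generous init window
	for {
		out, err := exec.Command("kubectl", "--context", "functional-973448",
			"exec", "mysql-859648c796-k8wl7", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Print(string(out)) // mysqld is up and credentials are live
			return
		}
		if time.Now().After(deadline) {
			panic(fmt.Sprintf("mysql never became ready: %v\n%s", err, out))
		}
		// ERROR 1045/2002 are expected while the container bootstraps.
		time.Sleep(5 * time.Second)
	}
}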

TestFunctional/parallel/FileSync (0.24s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/73397/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh "sudo cat /etc/test/nested/copy/73397/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

TestFunctional/parallel/CertSync (1.51s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/73397.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh "sudo cat /etc/ssl/certs/73397.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/73397.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh "sudo cat /usr/share/ca-certificates/73397.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/733972.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh "sudo cat /etc/ssl/certs/733972.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/733972.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh "sudo cat /usr/share/ca-certificates/733972.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.51s)
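
The test checks each certificate in three places: the two plain copies under /etc/ssl/certs and /usr/share/ca-certificates, and a hash-named file such as 51391683.0, which appears to follow the OpenSSL c_rehash convention (subject hash plus a collision counter). A sketch of deriving such a name, assuming openssl is on PATH; ca.pem is a placeholder input path, not one from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// openssl prints the subject hash used for link names like 51391683.0.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "ca.pem").Output() // placeholder input path
	if err != nil {
		panic(err)
	}
	fmt.Printf("/etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
}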

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-973448 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.22s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-973448 ssh "sudo systemctl is-active crio": exit status 1 (215.60812ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.22s)
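
systemctl is-active exits 0 only when the unit is active (exit status 3 covers inactive units), so the non-zero exit with "inactive" on stdout above is exactly what the test wants: this cluster runs dockerd, and CRI-O must not be running alongside it. A sketch of the same assertion, with the binary path and profile from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-973448",
		"ssh", "sudo systemctl is-active crio").CombinedOutput()
	state := strings.TrimSpace(string(out))
	// is-active exits 0 only for "active"; "inactive" plus a non-zero
	// status is the desired outcome when docker is the selected runtime.
	if state != "inactive" {
		panic(fmt.Sprintf("crio should be inactive, got %q (err: %v)", state, err))
	}
	fmt.Println("crio is inactive, as expected")
}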

TestFunctional/parallel/License (0.18s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-973448 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-973448 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-947gd" [b9d73474-bd7c-46ca-bccf-50bf7c99f561] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-947gd" [b9d73474-bd7c-46ca-bccf-50bf7c99f561] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.028715175s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.26s)
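
The "waiting 10m0s for pods matching" lines come from a label-selector poll: the helper repeatedly lists pods with app=hello-node and succeeds once one is Running and Ready, which is why the intermediate ContainersNotReady states show up while the echoserver image is pulled. A rough equivalent using kubectl's jsonpath output (context and label from the log; the polling interval is arbitrary):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		// One "<phase> <Ready status>" line per matching pod.
		out, _ := exec.Command("kubectl", "--context", "functional-973448",
			"get", "pods", "-l", "app=hello-node", "-o",
			`jsonpath={range .items[*]}{.status.phase} {.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}`).Output()
		if strings.Contains(string(out), "Running True") {
			fmt.Println("app=hello-node is healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
	panic("timed out waiting for app=hello-node")
}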

TestFunctional/parallel/MountCmd/any-port (10.6s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-973448 /tmp/TestFunctionalparallelMountCmdany-port2261986591/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1695141797392299322" to /tmp/TestFunctionalparallelMountCmdany-port2261986591/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1695141797392299322" to /tmp/TestFunctionalparallelMountCmdany-port2261986591/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1695141797392299322" to /tmp/TestFunctionalparallelMountCmdany-port2261986591/001/test-1695141797392299322
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-973448 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (257.247523ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 19 16:43 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 19 16:43 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 19 16:43 test-1695141797392299322
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh cat /mount-9p/test-1695141797392299322
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-973448 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [868cf8b1-36d5-4fc4-a0c2-f7905f776af0] Pending
helpers_test.go:344: "busybox-mount" [868cf8b1-36d5-4fc4-a0c2-f7905f776af0] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [868cf8b1-36d5-4fc4-a0c2-f7905f776af0] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [868cf8b1-36d5-4fc4-a0c2-f7905f776af0] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.023470557s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-973448 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-973448 /tmp/TestFunctionalparallelMountCmdany-port2261986591/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.60s)
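
The single failed findmnt at the top of this test is a startup race, not an error: the background `minikube mount` daemon had not finished exporting the host directory over 9p yet, so the test polls until the mount entry appears. A sketch of that readiness poll (profile, binary path, and mount point from the log; the timeout is arbitrary):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitFor9p polls findmnt inside the VM until the 9p mount shows up.
func waitFor9p(profile, mountPoint string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Print(string(out))
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("9p mount %s never appeared", mountPoint)
}

func main() {
	if err := waitFor9p("functional-973448", "/mount-9p", 30*time.Second); err != nil {
		panic(err)
	}
}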

TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "300.754283ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "40.548494ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "232.893111ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "42.044508ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

TestFunctional/parallel/MountCmd/specific-port (1.74s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-973448 /tmp/TestFunctionalparallelMountCmdspecific-port1238783905/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-973448 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (210.698251ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-973448 /tmp/TestFunctionalparallelMountCmdspecific-port1238783905/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-973448 ssh "sudo umount -f /mount-9p": exit status 1 (231.69165ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-973448 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-973448 /tmp/TestFunctionalparallelMountCmdspecific-port1238783905/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.74s)
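
The failing `umount -f` here is the cleanup path being exercised: stopping the mount daemon already removed the mount, so umount reports "not mounted" and exits with status 32, which the test records and tolerates. A sketch of teardown that treats that specific outcome as success (names from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-973448",
		"ssh", "sudo umount -f /mount-9p").CombinedOutput()
	// umount exits 32 with "not mounted" when the daemon already cleaned
	// up; for teardown purposes that counts as success.
	if err != nil && !strings.Contains(string(out), "not mounted") {
		panic(fmt.Sprintf("umount failed: %v\n%s", err, out))
	}
	fmt.Println("mount point is gone")
}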

TestFunctional/parallel/ServiceCmd/List (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.47s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.5s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-973448 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3120693114/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-973448 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3120693114/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-973448 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3120693114/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-973448 ssh "findmnt -T" /mount1: exit status 1 (265.294289ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-973448 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-973448 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3120693114/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-973448 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3120693114/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-973448 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3120693114/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.50s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 service list -o json
functional_test.go:1493: Took "574.254253ms" to run "out/minikube-linux-amd64 -p functional-973448 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.57s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.61.43:31448
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

TestFunctional/parallel/ServiceCmd/Format (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.33s)

TestFunctional/parallel/ServiceCmd/URL (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.61.43:31448
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)
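
The endpoint printed above is simply the node's IP joined with the NodePort that `kubectl expose --type=NodePort` allocated earlier (31448 here); `minikube service --url` resolves the two for you. A sketch that derives the same URL with kubectl alone (context and service name from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func kubectl(args ...string) string {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-973448"}, args...)...).Output()
	if err != nil {
		panic(err)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	ip := kubectl("get", "nodes", "-o",
		`jsonpath={.items[0].status.addresses[?(@.type=="InternalIP")].address}`)
	port := kubectl("get", "svc", "hello-node", "-o",
		"jsonpath={.spec.ports[0].nodePort}")
	fmt.Printf("http://%s:%s\n", ip, port) // e.g. http://192.168.61.43:31448
}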

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.67s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.67s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv/bash (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-973448 docker-env) && out/minikube-linux-amd64 status -p functional-973448"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-973448 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.89s)
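
`minikube docker-env` prints shell export statements (DOCKER_HOST and related variables) that repoint a local docker client at the daemon inside the VM, which is why the test wraps the eval and the follow-up command in a single bash -c invocation: the exports must live in the same shell as the docker call. A sketch of that pattern driven from Go (binary path and profile from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Evaluate docker-env in a subshell, then talk to the VM's dockerd.
	script := `eval $(out/minikube-linux-amd64 -p functional-973448 docker-env) && docker images`
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v\n%s", err, out))
	}
	fmt.Print(string(out))
}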

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-973448 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-973448
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-973448
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-973448 image ls --format short --alsologtostderr:
I0919 16:44:00.268624   80480 out.go:296] Setting OutFile to fd 1 ...
I0919 16:44:00.268903   80480 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 16:44:00.268912   80480 out.go:309] Setting ErrFile to fd 2...
I0919 16:44:00.268920   80480 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 16:44:00.269209   80480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-65689/.minikube/bin
I0919 16:44:00.269802   80480 config.go:182] Loaded profile config "functional-973448": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 16:44:00.269956   80480 config.go:182] Loaded profile config "functional-973448": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 16:44:00.270399   80480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0919 16:44:00.270455   80480 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:44:00.284414   80480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34177
I0919 16:44:00.284878   80480 main.go:141] libmachine: () Calling .GetVersion
I0919 16:44:00.285450   80480 main.go:141] libmachine: Using API Version  1
I0919 16:44:00.285476   80480 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:44:00.285890   80480 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:44:00.286111   80480 main.go:141] libmachine: (functional-973448) Calling .GetState
I0919 16:44:00.288283   80480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0919 16:44:00.288327   80480 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:44:00.302023   80480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32863
I0919 16:44:00.302460   80480 main.go:141] libmachine: () Calling .GetVersion
I0919 16:44:00.303053   80480 main.go:141] libmachine: Using API Version  1
I0919 16:44:00.303070   80480 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:44:00.303406   80480 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:44:00.303579   80480 main.go:141] libmachine: (functional-973448) Calling .DriverName
I0919 16:44:00.303774   80480 ssh_runner.go:195] Run: systemctl --version
I0919 16:44:00.303809   80480 main.go:141] libmachine: (functional-973448) Calling .GetSSHHostname
I0919 16:44:00.307389   80480 main.go:141] libmachine: (functional-973448) DBG | domain functional-973448 has defined MAC address 52:54:00:73:e3:e9 in network mk-functional-973448
I0919 16:44:00.307891   80480 main.go:141] libmachine: (functional-973448) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:e3:e9", ip: ""} in network mk-functional-973448: {Iface:virbr2 ExpiryTime:2023-09-19 17:40:10 +0000 UTC Type:0 Mac:52:54:00:73:e3:e9 Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:functional-973448 Clientid:01:52:54:00:73:e3:e9}
I0919 16:44:00.307933   80480 main.go:141] libmachine: (functional-973448) DBG | domain functional-973448 has defined IP address 192.168.61.43 and MAC address 52:54:00:73:e3:e9 in network mk-functional-973448
I0919 16:44:00.308034   80480 main.go:141] libmachine: (functional-973448) Calling .GetSSHPort
I0919 16:44:00.308171   80480 main.go:141] libmachine: (functional-973448) Calling .GetSSHKeyPath
I0919 16:44:00.308304   80480 main.go:141] libmachine: (functional-973448) Calling .GetSSHUsername
I0919 16:44:00.308434   80480 sshutil.go:53] new ssh client: &{IP:192.168.61.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/functional-973448/id_rsa Username:docker}
I0919 16:44:00.417943   80480 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0919 16:44:00.500042   80480 main.go:141] libmachine: Making call to close driver server
I0919 16:44:00.500054   80480 main.go:141] libmachine: (functional-973448) Calling .Close
I0919 16:44:00.500323   80480 main.go:141] libmachine: Successfully made call to close driver server
I0919 16:44:00.500344   80480 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 16:44:00.500355   80480 main.go:141] libmachine: Making call to close driver server
I0919 16:44:00.500366   80480 main.go:141] libmachine: (functional-973448) Calling .Close
I0919 16:44:00.500625   80480 main.go:141] libmachine: Successfully made call to close driver server
I0919 16:44:00.500642   80480 main.go:141] libmachine: (functional-973448) DBG | Closing plugin on server side
I0919 16:44:00.500645   80480 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
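
The stderr trace above shows how `image ls` is implemented: it opens an SSH session into the VM and runs `docker images --no-trunc --format "{{json .}}"`, which emits one JSON object per line. Decoding that stream takes only a few lines; a sketch against a local docker daemon (the struct covers just the fields used here):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerImage holds the fields we read from `docker images --format "{{json .}}"`.
type dockerImage struct {
	Repository string `json:"Repository"`
	Tag        string `json:"Tag"`
	ID         string `json:"ID"`
	Size       string `json:"Size"`
}

func main() {
	cmd := exec.Command("docker", "images", "--no-trunc", "--format", "{{json .}}")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() { // one JSON object per image, one per line
		var img dockerImage
		if err := json.Unmarshal(sc.Bytes(), &img); err != nil {
			panic(err)
		}
		fmt.Printf("%s:%s\t%s\t%s\n", img.Repository, img.Tag, img.ID, img.Size)
	}
	if err := cmd.Wait(); err != nil {
		panic(err)
	}
}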

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-973448 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/kube-proxy                  | v1.28.2           | c120fed2beb84 | 73.1MB |
| docker.io/library/nginx                     | latest            | f5a6b296b8a29 | 187MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-controller-manager     | v1.28.2           | 55f13c92defb1 | 122MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-973448 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.28.2           | cdcab12b2dd16 | 126MB  |
| registry.k8s.io/kube-scheduler              | v1.28.2           | 7a5d9d67a13f6 | 60.1MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-973448 | e1849a3f9dc2a | 30B    |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-973448 image ls --format table --alsologtostderr:
I0919 16:44:00.800886   80607 out.go:296] Setting OutFile to fd 1 ...
I0919 16:44:00.800992   80607 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 16:44:00.801001   80607 out.go:309] Setting ErrFile to fd 2...
I0919 16:44:00.801005   80607 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 16:44:00.801209   80607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-65689/.minikube/bin
I0919 16:44:00.801816   80607 config.go:182] Loaded profile config "functional-973448": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 16:44:00.801914   80607 config.go:182] Loaded profile config "functional-973448": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 16:44:00.802305   80607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0919 16:44:00.802343   80607 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:44:00.816398   80607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44549
I0919 16:44:00.816827   80607 main.go:141] libmachine: () Calling .GetVersion
I0919 16:44:00.817443   80607 main.go:141] libmachine: Using API Version  1
I0919 16:44:00.817474   80607 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:44:00.817885   80607 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:44:00.818084   80607 main.go:141] libmachine: (functional-973448) Calling .GetState
I0919 16:44:00.819797   80607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0919 16:44:00.819835   80607 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:44:00.833596   80607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46539
I0919 16:44:00.834024   80607 main.go:141] libmachine: () Calling .GetVersion
I0919 16:44:00.834509   80607 main.go:141] libmachine: Using API Version  1
I0919 16:44:00.834558   80607 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:44:00.834877   80607 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:44:00.835052   80607 main.go:141] libmachine: (functional-973448) Calling .DriverName
I0919 16:44:00.835255   80607 ssh_runner.go:195] Run: systemctl --version
I0919 16:44:00.835284   80607 main.go:141] libmachine: (functional-973448) Calling .GetSSHHostname
I0919 16:44:00.838131   80607 main.go:141] libmachine: (functional-973448) DBG | domain functional-973448 has defined MAC address 52:54:00:73:e3:e9 in network mk-functional-973448
I0919 16:44:00.838523   80607 main.go:141] libmachine: (functional-973448) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:e3:e9", ip: ""} in network mk-functional-973448: {Iface:virbr2 ExpiryTime:2023-09-19 17:40:10 +0000 UTC Type:0 Mac:52:54:00:73:e3:e9 Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:functional-973448 Clientid:01:52:54:00:73:e3:e9}
I0919 16:44:00.838553   80607 main.go:141] libmachine: (functional-973448) DBG | domain functional-973448 has defined IP address 192.168.61.43 and MAC address 52:54:00:73:e3:e9 in network mk-functional-973448
I0919 16:44:00.838691   80607 main.go:141] libmachine: (functional-973448) Calling .GetSSHPort
I0919 16:44:00.838868   80607 main.go:141] libmachine: (functional-973448) Calling .GetSSHKeyPath
I0919 16:44:00.839019   80607 main.go:141] libmachine: (functional-973448) Calling .GetSSHUsername
I0919 16:44:00.839174   80607 sshutil.go:53] new ssh client: &{IP:192.168.61.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/functional-973448/id_rsa Username:docker}
I0919 16:44:00.948990   80607 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0919 16:44:01.005808   80607 main.go:141] libmachine: Making call to close driver server
I0919 16:44:01.005835   80607 main.go:141] libmachine: (functional-973448) Calling .Close
I0919 16:44:01.006138   80607 main.go:141] libmachine: Successfully made call to close driver server
I0919 16:44:01.006160   80607 main.go:141] libmachine: (functional-973448) DBG | Closing plugin on server side
I0919 16:44:01.006167   80607 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 16:44:01.006178   80607 main.go:141] libmachine: Making call to close driver server
I0919 16:44:01.006187   80607 main.go:141] libmachine: (functional-973448) Calling .Close
I0919 16:44:01.006414   80607 main.go:141] libmachine: (functional-973448) DBG | Closing plugin on server side
I0919 16:44:01.006509   80607 main.go:141] libmachine: Successfully made call to close driver server
I0919 16:44:01.006557   80607 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-973448 image ls --format json --alsologtostderr:
[{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"126000000"},{"id":"f5a6b296b8a29b4e3d89ffa99e4a86309874ae400e82b3d3993f84e1e3bb0eb9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"60100000"},{"id":"c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"73100000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-973448"],"size":"32900000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"e1849a3f9dc2afbfa51d0cb9f58c59bb0e84e980ed63922df50331816600c958","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-973448"],"size":"30"},{"id":"55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"122000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-973448 image ls --format json --alsologtostderr:
I0919 16:44:00.545276   80548 out.go:296] Setting OutFile to fd 1 ...
I0919 16:44:00.545836   80548 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 16:44:00.545880   80548 out.go:309] Setting ErrFile to fd 2...
I0919 16:44:00.545900   80548 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 16:44:00.546331   80548 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-65689/.minikube/bin
I0919 16:44:00.547542   80548 config.go:182] Loaded profile config "functional-973448": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 16:44:00.547792   80548 config.go:182] Loaded profile config "functional-973448": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 16:44:00.548318   80548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0919 16:44:00.548865   80548 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:44:00.562001   80548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44959
I0919 16:44:00.562526   80548 main.go:141] libmachine: () Calling .GetVersion
I0919 16:44:00.563078   80548 main.go:141] libmachine: Using API Version  1
I0919 16:44:00.563097   80548 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:44:00.563535   80548 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:44:00.563726   80548 main.go:141] libmachine: (functional-973448) Calling .GetState
I0919 16:44:00.565843   80548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0919 16:44:00.565895   80548 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:44:00.579990   80548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40393
I0919 16:44:00.580345   80548 main.go:141] libmachine: () Calling .GetVersion
I0919 16:44:00.580789   80548 main.go:141] libmachine: Using API Version  1
I0919 16:44:00.580812   80548 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:44:00.581131   80548 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:44:00.581325   80548 main.go:141] libmachine: (functional-973448) Calling .DriverName
I0919 16:44:00.581529   80548 ssh_runner.go:195] Run: systemctl --version
I0919 16:44:00.581557   80548 main.go:141] libmachine: (functional-973448) Calling .GetSSHHostname
I0919 16:44:00.584049   80548 main.go:141] libmachine: (functional-973448) DBG | domain functional-973448 has defined MAC address 52:54:00:73:e3:e9 in network mk-functional-973448
I0919 16:44:00.584486   80548 main.go:141] libmachine: (functional-973448) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:e3:e9", ip: ""} in network mk-functional-973448: {Iface:virbr2 ExpiryTime:2023-09-19 17:40:10 +0000 UTC Type:0 Mac:52:54:00:73:e3:e9 Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:functional-973448 Clientid:01:52:54:00:73:e3:e9}
I0919 16:44:00.584512   80548 main.go:141] libmachine: (functional-973448) DBG | domain functional-973448 has defined IP address 192.168.61.43 and MAC address 52:54:00:73:e3:e9 in network mk-functional-973448
I0919 16:44:00.584622   80548 main.go:141] libmachine: (functional-973448) Calling .GetSSHPort
I0919 16:44:00.584757   80548 main.go:141] libmachine: (functional-973448) Calling .GetSSHKeyPath
I0919 16:44:00.584857   80548 main.go:141] libmachine: (functional-973448) Calling .GetSSHUsername
I0919 16:44:00.584999   80548 sshutil.go:53] new ssh client: &{IP:192.168.61.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/functional-973448/id_rsa Username:docker}
I0919 16:44:00.700940   80548 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0919 16:44:00.750988   80548 main.go:141] libmachine: Making call to close driver server
I0919 16:44:00.751004   80548 main.go:141] libmachine: (functional-973448) Calling .Close
I0919 16:44:00.751259   80548 main.go:141] libmachine: Successfully made call to close driver server
I0919 16:44:00.751290   80548 main.go:141] libmachine: (functional-973448) DBG | Closing plugin on server side
I0919 16:44:00.751299   80548 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 16:44:00.751319   80548 main.go:141] libmachine: Making call to close driver server
I0919 16:44:00.751333   80548 main.go:141] libmachine: (functional-973448) Calling .Close
I0919 16:44:00.751559   80548 main.go:141] libmachine: (functional-973448) DBG | Closing plugin on server side
I0919 16:44:00.751623   80548 main.go:141] libmachine: Successfully made call to close driver server
I0919 16:44:00.751642   80548 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-973448 image ls --format yaml --alsologtostderr:
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "73100000"
- id: 7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "60100000"
- id: f5a6b296b8a29b4e3d89ffa99e4a86309874ae400e82b3d3993f84e1e3bb0eb9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: e1849a3f9dc2afbfa51d0cb9f58c59bb0e84e980ed63922df50331816600c958
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-973448
size: "30"
- id: cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "126000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "122000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-973448
size: "32900000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-973448 image ls --format yaml --alsologtostderr:
I0919 16:44:00.268620   80481 out.go:296] Setting OutFile to fd 1 ...
I0919 16:44:00.268746   80481 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 16:44:00.268758   80481 out.go:309] Setting ErrFile to fd 2...
I0919 16:44:00.268766   80481 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 16:44:00.269036   80481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-65689/.minikube/bin
I0919 16:44:00.269891   80481 config.go:182] Loaded profile config "functional-973448": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 16:44:00.270046   80481 config.go:182] Loaded profile config "functional-973448": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 16:44:00.270593   80481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0919 16:44:00.270659   80481 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:44:00.284412   80481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34871
I0919 16:44:00.284907   80481 main.go:141] libmachine: () Calling .GetVersion
I0919 16:44:00.285490   80481 main.go:141] libmachine: Using API Version  1
I0919 16:44:00.285516   80481 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:44:00.285861   80481 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:44:00.286046   80481 main.go:141] libmachine: (functional-973448) Calling .GetState
I0919 16:44:00.288072   80481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0919 16:44:00.288116   80481 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:44:00.302156   80481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38799
I0919 16:44:00.302546   80481 main.go:141] libmachine: () Calling .GetVersion
I0919 16:44:00.303050   80481 main.go:141] libmachine: Using API Version  1
I0919 16:44:00.303073   80481 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:44:00.303441   80481 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:44:00.303612   80481 main.go:141] libmachine: (functional-973448) Calling .DriverName
I0919 16:44:00.303799   80481 ssh_runner.go:195] Run: systemctl --version
I0919 16:44:00.303820   80481 main.go:141] libmachine: (functional-973448) Calling .GetSSHHostname
I0919 16:44:00.307678   80481 main.go:141] libmachine: (functional-973448) DBG | domain functional-973448 has defined MAC address 52:54:00:73:e3:e9 in network mk-functional-973448
I0919 16:44:00.308645   80481 main.go:141] libmachine: (functional-973448) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:e3:e9", ip: ""} in network mk-functional-973448: {Iface:virbr2 ExpiryTime:2023-09-19 17:40:10 +0000 UTC Type:0 Mac:52:54:00:73:e3:e9 Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:functional-973448 Clientid:01:52:54:00:73:e3:e9}
I0919 16:44:00.311285   80481 main.go:141] libmachine: (functional-973448) DBG | domain functional-973448 has defined IP address 192.168.61.43 and MAC address 52:54:00:73:e3:e9 in network mk-functional-973448
I0919 16:44:00.311518   80481 main.go:141] libmachine: (functional-973448) Calling .GetSSHPort
I0919 16:44:00.311709   80481 main.go:141] libmachine: (functional-973448) Calling .GetSSHKeyPath
I0919 16:44:00.311876   80481 main.go:141] libmachine: (functional-973448) Calling .GetSSHUsername
I0919 16:44:00.312130   80481 sshutil.go:53] new ssh client: &{IP:192.168.61.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/functional-973448/id_rsa Username:docker}
I0919 16:44:00.412168   80481 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0919 16:44:00.463546   80481 main.go:141] libmachine: Making call to close driver server
I0919 16:44:00.463561   80481 main.go:141] libmachine: (functional-973448) Calling .Close
I0919 16:44:00.463905   80481 main.go:141] libmachine: (functional-973448) DBG | Closing plugin on server side
I0919 16:44:00.463952   80481 main.go:141] libmachine: Successfully made call to close driver server
I0919 16:44:00.463964   80481 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 16:44:00.463974   80481 main.go:141] libmachine: Making call to close driver server
I0919 16:44:00.463998   80481 main.go:141] libmachine: (functional-973448) Calling .Close
I0919 16:44:00.464250   80481 main.go:141] libmachine: (functional-973448) DBG | Closing plugin on server side
I0919 16:44:00.464288   80481 main.go:141] libmachine: Successfully made call to close driver server
I0919 16:44:00.464299   80481 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
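The YAML listing above is a plain sequence of mappings, so it lends itself to post-processing. Below is a minimal Go sketch, assuming gopkg.in/yaml.v3 and a minikube binary on PATH; the profile name is copied from the log, everything else is illustrative rather than part of the test.

package main

import (
	"fmt"
	"os/exec"

	"gopkg.in/yaml.v3"
)

// listedImage mirrors the keys in the `image ls --format yaml` output above.
type listedImage struct {
	ID          string   `yaml:"id"`
	RepoDigests []string `yaml:"repoDigests"`
	RepoTags    []string `yaml:"repoTags"`
	Size        string   `yaml:"size"` // sizes are emitted as quoted strings
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-973448",
		"image", "ls", "--format", "yaml").Output()
	if err != nil {
		panic(err)
	}
	var images []listedImage
	if err := yaml.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%v %s bytes\n", img.RepoTags, img.Size)
	}
}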

TestFunctional/parallel/ImageCommands/ImageBuild (3.69s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-973448 ssh pgrep buildkitd: exit status 1 (224.678761ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 image build -t localhost/my-image:functional-973448 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-973448 image build -t localhost/my-image:functional-973448 testdata/build --alsologtostderr: (3.254528993s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-973448 image build -t localhost/my-image:functional-973448 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 63bac54b18dd
Removing intermediate container 63bac54b18dd
---> 9fd299298c13
Step 3/3 : ADD content.txt /
---> b11a2ebe2789
Successfully built b11a2ebe2789
Successfully tagged localhost/my-image:functional-973448
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-973448 image build -t localhost/my-image:functional-973448 testdata/build --alsologtostderr:
I0919 16:44:00.730929   80588 out.go:296] Setting OutFile to fd 1 ...
I0919 16:44:00.731176   80588 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 16:44:00.731186   80588 out.go:309] Setting ErrFile to fd 2...
I0919 16:44:00.731191   80588 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0919 16:44:00.731426   80588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-65689/.minikube/bin
I0919 16:44:00.732047   80588 config.go:182] Loaded profile config "functional-973448": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 16:44:00.732620   80588 config.go:182] Loaded profile config "functional-973448": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0919 16:44:00.732986   80588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0919 16:44:00.733023   80588 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:44:00.746990   80588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37955
I0919 16:44:00.747427   80588 main.go:141] libmachine: () Calling .GetVersion
I0919 16:44:00.748034   80588 main.go:141] libmachine: Using API Version  1
I0919 16:44:00.748063   80588 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:44:00.748422   80588 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:44:00.748634   80588 main.go:141] libmachine: (functional-973448) Calling .GetState
I0919 16:44:00.750527   80588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0919 16:44:00.750571   80588 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 16:44:00.765726   80588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44511
I0919 16:44:00.766194   80588 main.go:141] libmachine: () Calling .GetVersion
I0919 16:44:00.766737   80588 main.go:141] libmachine: Using API Version  1
I0919 16:44:00.766765   80588 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 16:44:00.767149   80588 main.go:141] libmachine: () Calling .GetMachineName
I0919 16:44:00.767323   80588 main.go:141] libmachine: (functional-973448) Calling .DriverName
I0919 16:44:00.767546   80588 ssh_runner.go:195] Run: systemctl --version
I0919 16:44:00.767574   80588 main.go:141] libmachine: (functional-973448) Calling .GetSSHHostname
I0919 16:44:00.770393   80588 main.go:141] libmachine: (functional-973448) DBG | domain functional-973448 has defined MAC address 52:54:00:73:e3:e9 in network mk-functional-973448
I0919 16:44:00.770784   80588 main.go:141] libmachine: (functional-973448) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:e3:e9", ip: ""} in network mk-functional-973448: {Iface:virbr2 ExpiryTime:2023-09-19 17:40:10 +0000 UTC Type:0 Mac:52:54:00:73:e3:e9 Iaid: IPaddr:192.168.61.43 Prefix:24 Hostname:functional-973448 Clientid:01:52:54:00:73:e3:e9}
I0919 16:44:00.770829   80588 main.go:141] libmachine: (functional-973448) DBG | domain functional-973448 has defined IP address 192.168.61.43 and MAC address 52:54:00:73:e3:e9 in network mk-functional-973448
I0919 16:44:00.770973   80588 main.go:141] libmachine: (functional-973448) Calling .GetSSHPort
I0919 16:44:00.771130   80588 main.go:141] libmachine: (functional-973448) Calling .GetSSHKeyPath
I0919 16:44:00.771281   80588 main.go:141] libmachine: (functional-973448) Calling .GetSSHUsername
I0919 16:44:00.771402   80588 sshutil.go:53] new ssh client: &{IP:192.168.61.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/functional-973448/id_rsa Username:docker}
I0919 16:44:00.874130   80588 build_images.go:151] Building image from path: /tmp/build.83391932.tar
I0919 16:44:00.874203   80588 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0919 16:44:00.891286   80588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.83391932.tar
I0919 16:44:00.897901   80588 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.83391932.tar: stat -c "%s %y" /var/lib/minikube/build/build.83391932.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.83391932.tar': No such file or directory
I0919 16:44:00.897933   80588 ssh_runner.go:362] scp /tmp/build.83391932.tar --> /var/lib/minikube/build/build.83391932.tar (3072 bytes)
I0919 16:44:00.937520   80588 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.83391932
I0919 16:44:00.952482   80588 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.83391932 -xf /var/lib/minikube/build/build.83391932.tar
I0919 16:44:00.987051   80588 docker.go:339] Building image: /var/lib/minikube/build/build.83391932
I0919 16:44:00.987145   80588 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-973448 /var/lib/minikube/build/build.83391932
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
I0919 16:44:03.917251   80588 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-973448 /var/lib/minikube/build/build.83391932: (2.930077405s)
I0919 16:44:03.917329   80588 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.83391932
I0919 16:44:03.929638   80588 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.83391932.tar
I0919 16:44:03.941725   80588 build_images.go:207] Built localhost/my-image:functional-973448 from /tmp/build.83391932.tar
I0919 16:44:03.941756   80588 build_images.go:123] succeeded building to: functional-973448
I0919 16:44:03.941760   80588 build_images.go:124] failed building to: 
I0919 16:44:03.941783   80588 main.go:141] libmachine: Making call to close driver server
I0919 16:44:03.941792   80588 main.go:141] libmachine: (functional-973448) Calling .Close
I0919 16:44:03.942098   80588 main.go:141] libmachine: Successfully made call to close driver server
I0919 16:44:03.942117   80588 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 16:44:03.942129   80588 main.go:141] libmachine: Making call to close driver server
I0919 16:44:03.942139   80588 main.go:141] libmachine: (functional-973448) Calling .Close
I0919 16:44:03.942411   80588 main.go:141] libmachine: Successfully made call to close driver server
I0919 16:44:03.942435   80588 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.69s)
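As the stderr log shows, `image build` stages the local context as a tar inside the VM (under /var/lib/minikube/build), extracts it, and shells out to `docker build` there. A hedged sketch of driving the same flow from Go via the CLI; the profile and tag are copied from the log, and testdata/build stands in for any directory containing a Dockerfile.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const profile = "functional-973448"
	tag := "localhost/my-image:" + profile

	// Build an image inside the minikube VM from a local context directory.
	build := exec.Command("minikube", "-p", profile, "image", "build",
		"-t", tag, "testdata/build")
	if out, err := build.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("build failed: %v\n%s", err, out))
	}

	// Mirror the test's follow-up check at functional_test.go:447:
	// the freshly built tag must appear in `image ls`.
	ls, err := exec.Command("minikube", "-p", profile, "image", "ls").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("image listed:", strings.Contains(string(ls), tag))
}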

TestFunctional/parallel/ImageCommands/Setup (1.31s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.286176798s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-973448
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.31s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.86s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 image load --daemon gcr.io/google-containers/addon-resizer:functional-973448 --alsologtostderr
E0919 16:43:42.295828   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
2023/09/19 16:43:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-973448 image load --daemon gcr.io/google-containers/addon-resizer:functional-973448 --alsologtostderr: (4.615309391s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.86s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.66s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 image load --daemon gcr.io/google-containers/addon-resizer:functional-973448 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-973448 image load --daemon gcr.io/google-containers/addon-resizer:functional-973448 --alsologtostderr: (2.395692179s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.66s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.511334921s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-973448
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 image load --daemon gcr.io/google-containers/addon-resizer:functional-973448 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-973448 image load --daemon gcr.io/google-containers/addon-resizer:functional-973448 --alsologtostderr: (3.804838758s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.53s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.73s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 image save gcr.io/google-containers/addon-resizer:functional-973448 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-973448 image save gcr.io/google-containers/addon-resizer:functional-973448 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.729505102s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.73s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 image rm gcr.io/google-containers/addon-resizer:functional-973448 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.67s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-973448 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.458485209s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.67s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.95s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-973448
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-973448 image save --daemon gcr.io/google-containers/addon-resizer:functional-973448 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-973448 image save --daemon gcr.io/google-containers/addon-resizer:functional-973448 --alsologtostderr: (1.914300518s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-973448
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.95s)
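Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon exercise a save/remove/load round trip. A minimal sketch of that sequence, assuming the profile and image names from the log; the tar path here is illustrative.

package main

import "os/exec"

// roundTrip exports an image to a tar, removes it from the VM's runtime,
// then restores it from the file, i.e. the same steps the tests above run.
func roundTrip(profile, image, tarPath string) error {
	steps := [][]string{
		{"minikube", "-p", profile, "image", "save", image, tarPath},
		{"minikube", "-p", profile, "image", "rm", image},
		{"minikube", "-p", profile, "image", "load", tarPath},
	}
	for _, args := range steps {
		if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := roundTrip("functional-973448",
		"gcr.io/google-containers/addon-resizer:functional-973448",
		"/tmp/addon-resizer-save.tar"); err != nil {
		panic(err)
	}
}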

TestFunctional/delete_addon-resizer_images (0.06s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-973448
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-973448
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-973448
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestGvisorAddon (356.38s)
=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon
=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-954748 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-954748 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m49.176706744s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-954748 cache add gcr.io/k8s-minikube/gvisor-addon:2
E0919 17:15:51.471613   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/skaffold-182725/client.crt: no such file or directory
E0919 17:16:01.717761   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/skaffold-182725/client.crt: no such file or directory
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-954748 cache add gcr.io/k8s-minikube/gvisor-addon:2: (26.015908373s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-954748 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-954748 addons enable gvisor: (4.313546037s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [9b643aad-336a-4a00-90b2-0d08e76f0ac1] Running
E0919 17:16:22.198148   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/skaffold-182725/client.crt: no such file or directory
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.099962323s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-954748 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [b5e9de3f-2303-485d-9cad-47cc19d0ef26] Pending
helpers_test.go:344: "nginx-gvisor" [b5e9de3f-2303-485d-9cad-47cc19d0ef26] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [b5e9de3f-2303-485d-9cad-47cc19d0ef26] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 49.023203584s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-954748
E0919 17:17:20.371633   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
E0919 17:17:20.608145   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/ingress-addon-legacy-225902/client.crt: no such file or directory
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-954748: (1m32.693245598s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-954748 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-954748 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (58.368281905s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [9b643aad-336a-4a00-90b2-0d08e76f0ac1] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.024392898s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [b5e9de3f-2303-485d-9cad-47cc19d0ef26] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.014635981s
helpers_test.go:175: Cleaning up "gvisor-954748" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-954748
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-954748: (1.311336141s)
--- PASS: TestGvisorAddon (356.38s)
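The repeated "waiting 4m0s for pods matching ..." steps reduce to polling a label selector until a matching pod reports Running (the real helper also tracks Ready conditions). A rough client-go equivalent, assuming a kubeconfig at the default path; the selector and 4m0s timeout are taken from the log, while the 5s polling interval is a guess.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s wait in the log
	for time.Now().Before(deadline) {
		pods, err := clientset.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "run=nginx,runtime=gvisor"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("healthy:", p.Name)
					return
				}
			}
		}
		time.Sleep(5 * time.Second)
	}
	panic("timed out waiting for a running gvisor-backed pod")
}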

TestImageBuild/serial/Setup (51.62s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-836020 --driver=kvm2 
E0919 16:45:04.217792   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-836020 --driver=kvm2 : (51.617236943s)
--- PASS: TestImageBuild/serial/Setup (51.62s)

TestImageBuild/serial/NormalBuild (1.65s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-836020
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-836020: (1.651569327s)
--- PASS: TestImageBuild/serial/NormalBuild (1.65s)

TestImageBuild/serial/BuildWithBuildArg (1.31s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-836020
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-836020: (1.314086881s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.31s)

TestImageBuild/serial/BuildWithDockerIgnore (0.39s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-836020
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.39s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.27s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-836020
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.27s)

TestIngressAddonLegacy/StartLegacyK8sCluster (109.86s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-225902 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-225902 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : (1m49.863049701s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (109.86s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.4s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-225902 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-225902 addons enable ingress --alsologtostderr -v=5: (14.403239171s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.40s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.5s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-225902 addons enable ingress-dns --alsologtostderr -v=5
E0919 16:47:20.371785   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.50s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (40.83s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-225902 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-225902 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.486175488s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-225902 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-225902 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [59fca184-765c-4ded-ab23-cbec49c1b14d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [59fca184-765c-4ded-ab23-cbec49c1b14d] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.010417335s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-225902 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-225902 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-225902 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.50.233
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-225902 addons disable ingress-dns --alsologtostderr -v=1
E0919 16:47:48.058918   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-225902 addons disable ingress-dns --alsologtostderr -v=1: (10.772337678s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-225902 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-225902 addons disable ingress --alsologtostderr -v=1: (7.456522645s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (40.83s)
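The routing check at addons_test.go:238 is a curl with an explicit Host header, run against 127.0.0.1 from inside the VM over ssh. A sketch of the same request issued from the host in Go instead, pointed at the node IP reported above (192.168.50.233); overriding req.Host makes the ingress controller route by hostname without any DNS entry.

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://192.168.50.233/", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "nginx.example.com" // matched by the ingress rule, not resolved via DNS
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}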

TestJSONOutput/start/Command (105.61s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-836445 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E0919 16:48:17.311261   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/functional-973448/client.crt: no such file or directory
E0919 16:48:17.316572   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/functional-973448/client.crt: no such file or directory
E0919 16:48:17.326880   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/functional-973448/client.crt: no such file or directory
E0919 16:48:17.347137   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/functional-973448/client.crt: no such file or directory
E0919 16:48:17.387402   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/functional-973448/client.crt: no such file or directory
E0919 16:48:17.467813   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/functional-973448/client.crt: no such file or directory
E0919 16:48:17.628266   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/functional-973448/client.crt: no such file or directory
E0919 16:48:17.948954   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/functional-973448/client.crt: no such file or directory
E0919 16:48:18.589923   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/functional-973448/client.crt: no such file or directory
E0919 16:48:19.870525   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/functional-973448/client.crt: no such file or directory
E0919 16:48:22.430788   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/functional-973448/client.crt: no such file or directory
E0919 16:48:27.551375   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/functional-973448/client.crt: no such file or directory
E0919 16:48:37.792017   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/functional-973448/client.crt: no such file or directory
E0919 16:48:58.273175   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/functional-973448/client.crt: no such file or directory
E0919 16:49:39.234827   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/functional-973448/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-836445 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m45.612996719s)
--- PASS: TestJSONOutput/start/Command (105.61s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.56s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-836445 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.56s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.51s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-836445 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.51s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (13.09s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-836445 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-836445 --output=json --user=testUser: (13.091421725s)
--- PASS: TestJSONOutput/stop/Command (13.09s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.18s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-864189 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-864189 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (57.568149ms)
-- stdout --
	{"specversion":"1.0","id":"ff8b5ef2-0ab0-4fc5-9cf0-eb9364ac42f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-864189] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"340a1aa7-b27f-4ea0-9adc-3770fa74d473","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17240"}}
	{"specversion":"1.0","id":"4fe89135-0b7f-409e-9206-6614336f416c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"20b65e38-3581-4820-8a23-f97346c1d0c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17240-65689/kubeconfig"}}
	{"specversion":"1.0","id":"0cc9334d-2065-42a0-8bef-246669bbd6ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-65689/.minikube"}}
	{"specversion":"1.0","id":"d4f110e6-9201-4d0f-925c-908ef8a19d6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"15ed6471-be4f-4d2f-abf6-f862ed138558","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ff5608b7-cc31-45bc-a4c5-69e7298a088b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-864189" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-864189
--- PASS: TestErrorJSONOutput (0.18s)
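Every stdout line above is a self-contained CloudEvents-style JSON object, so consumers of --output=json can decode the stream one line at a time. A small stdlib-only sketch that reads events on stdin (field names mirror the output above; piping minikube's output into it is the assumed usage).

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent mirrors the envelope fields shown in the stdout above.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON lines mixed into the stream
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}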

TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (110.92s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-039844 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-039844 --driver=kvm2 : (53.136486827s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-043369 --driver=kvm2 
E0919 16:51:01.155808   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/functional-973448/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-043369 --driver=kvm2 : (55.179701253s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-039844
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-043369
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-043369" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-043369
helpers_test.go:175: Cleaning up "first-039844" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-039844
--- PASS: TestMinikubeProfile (110.92s)

TestMountStart/serial/StartWithMountFirst (31.49s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-386653 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0919 16:52:20.371596   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
E0919 16:52:20.608028   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/ingress-addon-legacy-225902/client.crt: no such file or directory
E0919 16:52:20.613293   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/ingress-addon-legacy-225902/client.crt: no such file or directory
E0919 16:52:20.623573   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/ingress-addon-legacy-225902/client.crt: no such file or directory
E0919 16:52:20.643831   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/ingress-addon-legacy-225902/client.crt: no such file or directory
E0919 16:52:20.684157   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/ingress-addon-legacy-225902/client.crt: no such file or directory
E0919 16:52:20.764526   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/ingress-addon-legacy-225902/client.crt: no such file or directory
E0919 16:52:20.924993   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/ingress-addon-legacy-225902/client.crt: no such file or directory
E0919 16:52:21.245568   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/ingress-addon-legacy-225902/client.crt: no such file or directory
E0919 16:52:21.886510   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/ingress-addon-legacy-225902/client.crt: no such file or directory
E0919 16:52:23.167095   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/ingress-addon-legacy-225902/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-386653 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (30.485024023s)
--- PASS: TestMountStart/serial/StartWithMountFirst (31.49s)

TestMountStart/serial/VerifyMountFirst (0.37s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-386653 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-386653 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
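Note: the two verification commands above can be replayed by hand against a live profile. A minimal sketch, assuming the mount-start-1-386653 profile from this run is still up and was started with the same --mount flags:

    # list the host directory through the guest-side 9p mount
    out/minikube-linux-amd64 -p mount-start-1-386653 ssh -- ls /minikube-host
    # confirm a 9p filesystem is mounted (msize/port should match the start flags)
    out/minikube-linux-amd64 -p mount-start-1-386653 ssh -- "mount | grep 9p"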

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.67s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-401486 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0919 16:52:25.727289   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/ingress-addon-legacy-225902/client.crt: no such file or directory
E0919 16:52:30.848520   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/ingress-addon-legacy-225902/client.crt: no such file or directory
E0919 16:52:41.088752   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/ingress-addon-legacy-225902/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-401486 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (27.668494196s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.67s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-401486 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-401486 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.84s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-386653 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.84s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-401486 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-401486 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (2.14s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-401486
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-401486: (2.139784099s)
--- PASS: TestMountStart/serial/Stop (2.14s)

                                                
                                    
TestMountStart/serial/RestartStopped (24.3s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-401486
E0919 16:53:01.568993   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/ingress-addon-legacy-225902/client.crt: no such file or directory
E0919 16:53:17.311385   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/functional-973448/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-401486: (23.297039259s)
--- PASS: TestMountStart/serial/RestartStopped (24.30s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-401486 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-401486 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (133.94s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-415589 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0919 16:53:42.530171   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/ingress-addon-legacy-225902/client.crt: no such file or directory
E0919 16:53:44.996937   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/functional-973448/client.crt: no such file or directory
E0919 16:55:04.453490   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/ingress-addon-legacy-225902/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-415589 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m13.490602421s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (133.94s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.81s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415589 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415589 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-415589 -- rollout status deployment/busybox: (3.027850011s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415589 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415589 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415589 -- exec busybox-5bc68d56bd-9qfss -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415589 -- exec busybox-5bc68d56bd-rkqh6 -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415589 -- exec busybox-5bc68d56bd-9qfss -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415589 -- exec busybox-5bc68d56bd-rkqh6 -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415589 -- exec busybox-5bc68d56bd-9qfss -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415589 -- exec busybox-5bc68d56bd-rkqh6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.81s)
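The DNS portion above can be approximated by hand once the busybox deployment is rolled out. A sketch, assuming the multinode-415589 profile from this run (pod names like busybox-5bc68d56bd-9qfss are per-run and will differ):

    # wait for the test deployment, then resolve an external and an in-cluster name from every pod
    out/minikube-linux-amd64 kubectl -p multinode-415589 -- rollout status deployment/busybox
    PODS=$(out/minikube-linux-amd64 kubectl -p multinode-415589 -- get pods -o jsonpath='{.items[*].metadata.name}')
    for pod in $PODS; do
      out/minikube-linux-amd64 kubectl -p multinode-415589 -- exec "$pod" -- nslookup kubernetes.io
      out/minikube-linux-amd64 kubectl -p multinode-415589 -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done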

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.84s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415589 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415589 -- exec busybox-5bc68d56bd-9qfss -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415589 -- exec busybox-5bc68d56bd-9qfss -- sh -c "ping -c 1 192.168.50.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415589 -- exec busybox-5bc68d56bd-rkqh6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-415589 -- exec busybox-5bc68d56bd-rkqh6 -- sh -c "ping -c 1 192.168.50.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)
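The awk 'NR==5' | cut -d' ' -f3 pipeline above extracts what, in this busybox image's nslookup output, is the resolved address of host.minikube.internal (fifth line, third space-separated field — here the 192.168.50.1 host gateway), and the test then pings that address from inside the pod. The same idea by hand, with the pod name from this run standing in for any running busybox pod:

    # resolve the host gateway from inside the pod, then ping the extracted address
    HOST_IP=$(out/minikube-linux-amd64 kubectl -p multinode-415589 -- exec busybox-5bc68d56bd-9qfss -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    out/minikube-linux-amd64 kubectl -p multinode-415589 -- exec busybox-5bc68d56bd-9qfss -- \
      sh -c "ping -c 1 $HOST_IP"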

                                                
                                    
TestMultiNode/serial/AddNode (44.53s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-415589 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-415589 -v 3 --alsologtostderr: (43.963769574s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (44.53s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.2s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 cp testdata/cp-test.txt multinode-415589:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 ssh -n multinode-415589 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 cp multinode-415589:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2979988656/001/cp-test_multinode-415589.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 ssh -n multinode-415589 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 cp multinode-415589:/home/docker/cp-test.txt multinode-415589-m02:/home/docker/cp-test_multinode-415589_multinode-415589-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 ssh -n multinode-415589 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 ssh -n multinode-415589-m02 "sudo cat /home/docker/cp-test_multinode-415589_multinode-415589-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 cp multinode-415589:/home/docker/cp-test.txt multinode-415589-m03:/home/docker/cp-test_multinode-415589_multinode-415589-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 ssh -n multinode-415589 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 ssh -n multinode-415589-m03 "sudo cat /home/docker/cp-test_multinode-415589_multinode-415589-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 cp testdata/cp-test.txt multinode-415589-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 ssh -n multinode-415589-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 cp multinode-415589-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2979988656/001/cp-test_multinode-415589-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 ssh -n multinode-415589-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 cp multinode-415589-m02:/home/docker/cp-test.txt multinode-415589:/home/docker/cp-test_multinode-415589-m02_multinode-415589.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 ssh -n multinode-415589-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 ssh -n multinode-415589 "sudo cat /home/docker/cp-test_multinode-415589-m02_multinode-415589.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 cp multinode-415589-m02:/home/docker/cp-test.txt multinode-415589-m03:/home/docker/cp-test_multinode-415589-m02_multinode-415589-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 ssh -n multinode-415589-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 ssh -n multinode-415589-m03 "sudo cat /home/docker/cp-test_multinode-415589-m02_multinode-415589-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 cp testdata/cp-test.txt multinode-415589-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 ssh -n multinode-415589-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 cp multinode-415589-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2979988656/001/cp-test_multinode-415589-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 ssh -n multinode-415589-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 cp multinode-415589-m03:/home/docker/cp-test.txt multinode-415589:/home/docker/cp-test_multinode-415589-m03_multinode-415589.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 ssh -n multinode-415589-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 ssh -n multinode-415589 "sudo cat /home/docker/cp-test_multinode-415589-m03_multinode-415589.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 cp multinode-415589-m03:/home/docker/cp-test.txt multinode-415589-m02:/home/docker/cp-test_multinode-415589-m03_multinode-415589-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 ssh -n multinode-415589-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 ssh -n multinode-415589-m02 "sudo cat /home/docker/cp-test_multinode-415589-m03_multinode-415589-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.20s)
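Each cp above is paired with an ssh ... cat so the harness can compare the copied bytes against the source. A minimal hand-run round trip (profile and testdata file from this run; the /tmp destination name is arbitrary):

    # push a file into the control-plane node, pull it back out, and compare
    out/minikube-linux-amd64 -p multinode-415589 cp testdata/cp-test.txt multinode-415589:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-415589 cp multinode-415589:/home/docker/cp-test.txt /tmp/cp-test-roundtrip.txt
    diff testdata/cp-test.txt /tmp/cp-test-roundtrip.txt && echo "round trip OK"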

                                                
                                    
TestMultiNode/serial/StopNode (3.96s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-415589 node stop m03: (3.076684786s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-415589 status: exit status 7 (435.148296ms)

                                                
                                                
-- stdout --
	multinode-415589
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-415589-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-415589-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-415589 status --alsologtostderr: exit status 7 (443.05467ms)

                                                
                                                
-- stdout --
	multinode-415589
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-415589-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-415589-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 16:56:38.756461   87772 out.go:296] Setting OutFile to fd 1 ...
	I0919 16:56:38.756722   87772 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 16:56:38.756733   87772 out.go:309] Setting ErrFile to fd 2...
	I0919 16:56:38.756737   87772 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 16:56:38.756924   87772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-65689/.minikube/bin
	I0919 16:56:38.757095   87772 out.go:303] Setting JSON to false
	I0919 16:56:38.757131   87772 mustload.go:65] Loading cluster: multinode-415589
	I0919 16:56:38.757249   87772 notify.go:220] Checking for updates...
	I0919 16:56:38.757702   87772 config.go:182] Loaded profile config "multinode-415589": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 16:56:38.757723   87772 status.go:255] checking status of multinode-415589 ...
	I0919 16:56:38.758268   87772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 16:56:38.758328   87772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:56:38.775136   87772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36487
	I0919 16:56:38.776316   87772 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:56:38.776981   87772 main.go:141] libmachine: Using API Version  1
	I0919 16:56:38.777002   87772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:56:38.777341   87772 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:56:38.777520   87772 main.go:141] libmachine: (multinode-415589) Calling .GetState
	I0919 16:56:38.779127   87772 status.go:330] multinode-415589 host status = "Running" (err=<nil>)
	I0919 16:56:38.779143   87772 host.go:66] Checking if "multinode-415589" exists ...
	I0919 16:56:38.779428   87772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 16:56:38.779470   87772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:56:38.794527   87772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42935
	I0919 16:56:38.794903   87772 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:56:38.795347   87772 main.go:141] libmachine: Using API Version  1
	I0919 16:56:38.795372   87772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:56:38.795680   87772 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:56:38.795858   87772 main.go:141] libmachine: (multinode-415589) Calling .GetIP
	I0919 16:56:38.798457   87772 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:56:38.798850   87772 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
	I0919 16:56:38.798876   87772 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:56:38.798985   87772 host.go:66] Checking if "multinode-415589" exists ...
	I0919 16:56:38.799250   87772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 16:56:38.799286   87772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:56:38.813721   87772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33281
	I0919 16:56:38.814062   87772 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:56:38.814478   87772 main.go:141] libmachine: Using API Version  1
	I0919 16:56:38.814504   87772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:56:38.814786   87772 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:56:38.814955   87772 main.go:141] libmachine: (multinode-415589) Calling .DriverName
	I0919 16:56:38.815141   87772 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 16:56:38.815174   87772 main.go:141] libmachine: (multinode-415589) Calling .GetSSHHostname
	I0919 16:56:38.817812   87772 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:56:38.818224   87772 main.go:141] libmachine: (multinode-415589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:6c:54", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:53:39 +0000 UTC Type:0 Mac:52:54:00:a4:6c:54 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:multinode-415589 Clientid:01:52:54:00:a4:6c:54}
	I0919 16:56:38.818260   87772 main.go:141] libmachine: (multinode-415589) DBG | domain multinode-415589 has defined IP address 192.168.50.11 and MAC address 52:54:00:a4:6c:54 in network mk-multinode-415589
	I0919 16:56:38.818315   87772 main.go:141] libmachine: (multinode-415589) Calling .GetSSHPort
	I0919 16:56:38.818495   87772 main.go:141] libmachine: (multinode-415589) Calling .GetSSHKeyPath
	I0919 16:56:38.818653   87772 main.go:141] libmachine: (multinode-415589) Calling .GetSSHUsername
	I0919 16:56:38.818813   87772 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589/id_rsa Username:docker}
	I0919 16:56:38.915331   87772 ssh_runner.go:195] Run: systemctl --version
	I0919 16:56:38.926952   87772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 16:56:38.944997   87772 kubeconfig.go:92] found "multinode-415589" server: "https://192.168.50.11:8443"
	I0919 16:56:38.945034   87772 api_server.go:166] Checking apiserver status ...
	I0919 16:56:38.945077   87772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 16:56:38.957032   87772 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1917/cgroup
	I0919 16:56:38.965186   87772 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/podde462c90cfa089272f7e7f2885319010/ff647b080408dff9b687ea6170cb0449f78abb29d65d048fe7f6d11777d6dddb"
	I0919 16:56:38.965255   87772 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podde462c90cfa089272f7e7f2885319010/ff647b080408dff9b687ea6170cb0449f78abb29d65d048fe7f6d11777d6dddb/freezer.state
	I0919 16:56:38.973523   87772 api_server.go:204] freezer state: "THAWED"
	I0919 16:56:38.973546   87772 api_server.go:253] Checking apiserver healthz at https://192.168.50.11:8443/healthz ...
	I0919 16:56:38.978195   87772 api_server.go:279] https://192.168.50.11:8443/healthz returned 200:
	ok
	I0919 16:56:38.978214   87772 status.go:421] multinode-415589 apiserver status = Running (err=<nil>)
	I0919 16:56:38.978223   87772 status.go:257] multinode-415589 status: &{Name:multinode-415589 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 16:56:38.978240   87772 status.go:255] checking status of multinode-415589-m02 ...
	I0919 16:56:38.978531   87772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 16:56:38.978557   87772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:56:38.993048   87772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33843
	I0919 16:56:38.993441   87772 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:56:38.993953   87772 main.go:141] libmachine: Using API Version  1
	I0919 16:56:38.993977   87772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:56:38.994330   87772 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:56:38.994491   87772 main.go:141] libmachine: (multinode-415589-m02) Calling .GetState
	I0919 16:56:38.996029   87772 status.go:330] multinode-415589-m02 host status = "Running" (err=<nil>)
	I0919 16:56:38.996044   87772 host.go:66] Checking if "multinode-415589-m02" exists ...
	I0919 16:56:38.996314   87772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 16:56:38.996343   87772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:56:39.010406   87772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40683
	I0919 16:56:39.010827   87772 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:56:39.011268   87772 main.go:141] libmachine: Using API Version  1
	I0919 16:56:39.011293   87772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:56:39.011594   87772 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:56:39.011803   87772 main.go:141] libmachine: (multinode-415589-m02) Calling .GetIP
	I0919 16:56:39.014354   87772 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:56:39.014767   87772 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
	I0919 16:56:39.014806   87772 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:56:39.014882   87772 host.go:66] Checking if "multinode-415589-m02" exists ...
	I0919 16:56:39.015219   87772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 16:56:39.015246   87772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:56:39.029674   87772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45483
	I0919 16:56:39.030159   87772 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:56:39.030672   87772 main.go:141] libmachine: Using API Version  1
	I0919 16:56:39.030693   87772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:56:39.031062   87772 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:56:39.031259   87772 main.go:141] libmachine: (multinode-415589-m02) Calling .DriverName
	I0919 16:56:39.031498   87772 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 16:56:39.031526   87772 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHHostname
	I0919 16:56:39.034372   87772 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:56:39.034835   87772 main.go:141] libmachine: (multinode-415589-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:e7:29", ip: ""} in network mk-multinode-415589: {Iface:virbr2 ExpiryTime:2023-09-19 17:54:56 +0000 UTC Type:0 Mac:52:54:00:33:e7:29 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:multinode-415589-m02 Clientid:01:52:54:00:33:e7:29}
	I0919 16:56:39.034868   87772 main.go:141] libmachine: (multinode-415589-m02) DBG | domain multinode-415589-m02 has defined IP address 192.168.50.170 and MAC address 52:54:00:33:e7:29 in network mk-multinode-415589
	I0919 16:56:39.034986   87772 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHPort
	I0919 16:56:39.035137   87772 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHKeyPath
	I0919 16:56:39.035261   87772 main.go:141] libmachine: (multinode-415589-m02) Calling .GetSSHUsername
	I0919 16:56:39.035369   87772 sshutil.go:53] new ssh client: &{IP:192.168.50.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17240-65689/.minikube/machines/multinode-415589-m02/id_rsa Username:docker}
	I0919 16:56:39.129947   87772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 16:56:39.143185   87772 status.go:257] multinode-415589-m02 status: &{Name:multinode-415589-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0919 16:56:39.143239   87772 status.go:255] checking status of multinode-415589-m03 ...
	I0919 16:56:39.143572   87772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 16:56:39.143625   87772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 16:56:39.158491   87772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43689
	I0919 16:56:39.158905   87772 main.go:141] libmachine: () Calling .GetVersion
	I0919 16:56:39.159360   87772 main.go:141] libmachine: Using API Version  1
	I0919 16:56:39.159383   87772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 16:56:39.159732   87772 main.go:141] libmachine: () Calling .GetMachineName
	I0919 16:56:39.159917   87772 main.go:141] libmachine: (multinode-415589-m03) Calling .GetState
	I0919 16:56:39.161432   87772 status.go:330] multinode-415589-m03 host status = "Stopped" (err=<nil>)
	I0919 16:56:39.161444   87772 status.go:343] host is not running, skipping remaining checks
	I0919 16:56:39.161449   87772 status.go:257] multinode-415589-m03 status: &{Name:multinode-415589-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.96s)
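The non-zero exits above are expected: in this run minikube status returns exit code 7 once any node in the profile is stopped, while still printing per-node state on stdout. Checked by hand:

    # stop one worker, then inspect the aggregate status and its exit code
    out/minikube-linux-amd64 -p multinode-415589 node stop m03
    out/minikube-linux-amd64 -p multinode-415589 status
    echo "status exit code: $?"   # 7 while m03 is stopped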

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (258.69s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-415589
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-415589
E0919 16:57:20.371561   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
E0919 16:57:20.607976   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/ingress-addon-legacy-225902/client.crt: no such file or directory
E0919 16:57:48.294155   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/ingress-addon-legacy-225902/client.crt: no such file or directory
E0919 16:58:17.313548   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/functional-973448/client.crt: no such file or directory
E0919 16:58:43.421756   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-415589: (1m55.109235769s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-415589 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-415589 --wait=true -v=8 --alsologtostderr: (2m23.48471072s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-415589
--- PASS: TestMultiNode/serial/RestartKeepsNodes (258.69s)
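What this test asserts is that a full stop/start cycle preserves the node set. Comparing node names before and after is enough to check it by hand; a sketch (the /tmp paths are arbitrary, and only names are compared since node IPs may be reassigned on restart):

    # record node names, bounce the whole cluster, and diff the lists
    out/minikube-linux-amd64 node list -p multinode-415589 | awk '{print $1}' > /tmp/nodes-before.txt
    out/minikube-linux-amd64 stop -p multinode-415589
    out/minikube-linux-amd64 start -p multinode-415589 --wait=true
    out/minikube-linux-amd64 node list -p multinode-415589 | awk '{print $1}' > /tmp/nodes-after.txt
    diff /tmp/nodes-before.txt /tmp/nodes-after.txt && echo "node set preserved"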

                                                
                                    
TestMultiNode/serial/DeleteNode (1.76s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-415589 node delete m03: (1.205837015s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.76s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (26.29s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-415589 stop: (26.141348433s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-415589 status: exit status 7 (76.57721ms)

                                                
                                                
-- stdout --
	multinode-415589
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-415589-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-415589 status --alsologtostderr: exit status 7 (72.216733ms)

                                                
                                                
-- stdout --
	multinode-415589
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-415589-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 17:01:47.524998   89421 out.go:296] Setting OutFile to fd 1 ...
	I0919 17:01:47.525117   89421 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:01:47.525126   89421 out.go:309] Setting ErrFile to fd 2...
	I0919 17:01:47.525130   89421 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0919 17:01:47.525308   89421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17240-65689/.minikube/bin
	I0919 17:01:47.525450   89421 out.go:303] Setting JSON to false
	I0919 17:01:47.525478   89421 mustload.go:65] Loading cluster: multinode-415589
	I0919 17:01:47.525624   89421 notify.go:220] Checking for updates...
	I0919 17:01:47.525900   89421 config.go:182] Loaded profile config "multinode-415589": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0919 17:01:47.525916   89421 status.go:255] checking status of multinode-415589 ...
	I0919 17:01:47.526398   89421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:01:47.526493   89421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:01:47.540302   89421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42825
	I0919 17:01:47.540669   89421 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:01:47.541484   89421 main.go:141] libmachine: Using API Version  1
	I0919 17:01:47.541509   89421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:01:47.542062   89421 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:01:47.542683   89421 main.go:141] libmachine: (multinode-415589) Calling .GetState
	I0919 17:01:47.544271   89421 status.go:330] multinode-415589 host status = "Stopped" (err=<nil>)
	I0919 17:01:47.544290   89421 status.go:343] host is not running, skipping remaining checks
	I0919 17:01:47.544298   89421 status.go:257] multinode-415589 status: &{Name:multinode-415589 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 17:01:47.544338   89421 status.go:255] checking status of multinode-415589-m02 ...
	I0919 17:01:47.544727   89421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0919 17:01:47.544784   89421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 17:01:47.558622   89421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45823
	I0919 17:01:47.559004   89421 main.go:141] libmachine: () Calling .GetVersion
	I0919 17:01:47.559467   89421 main.go:141] libmachine: Using API Version  1
	I0919 17:01:47.559498   89421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 17:01:47.559849   89421 main.go:141] libmachine: () Calling .GetMachineName
	I0919 17:01:47.560012   89421 main.go:141] libmachine: (multinode-415589-m02) Calling .GetState
	I0919 17:01:47.561792   89421 status.go:330] multinode-415589-m02 host status = "Stopped" (err=<nil>)
	I0919 17:01:47.561803   89421 status.go:343] host is not running, skipping remaining checks
	I0919 17:01:47.561808   89421 status.go:257] multinode-415589-m02 status: &{Name:multinode-415589-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (26.29s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (100.37s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-415589 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E0919 17:02:20.371232   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
E0919 17:02:20.608699   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/ingress-addon-legacy-225902/client.crt: no such file or directory
E0919 17:03:17.312160   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/functional-973448/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-415589 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m39.810310985s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-415589 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (100.37s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (53.35s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-415589
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-415589-m02 --driver=kvm2 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-415589-m02 --driver=kvm2 : exit status 14 (61.033619ms)

                                                
                                                
-- stdout --
	* [multinode-415589-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17240
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17240-65689/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-65689/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-415589-m02' is duplicated with machine name 'multinode-415589-m02' in profile 'multinode-415589'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-415589-m03 --driver=kvm2 
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-415589-m03 --driver=kvm2 : (52.056237827s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-415589
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-415589: exit status 80 (229.1126ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-415589
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-415589-m03 already exists in multinode-415589-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-415589-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (53.35s)
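The exit-14 rejection above is minikube refusing a profile whose name collides with a machine of an existing profile (multinode-415589's second node is the machine multinode-415589-m02), while multinode-415589-m03 is accepted as a standalone profile and then trips the later node add. Any name outside an existing cluster's machine-name pattern avoids both problems; a sketch (multinode-sandbox is a made-up example name):

    # collides with multinode-415589's m02 machine: exits with status 14
    out/minikube-linux-amd64 start -p multinode-415589-m02 --driver=kvm2 || echo "rejected: $?"
    # a name unrelated to any existing profile's machines starts normally
    out/minikube-linux-amd64 start -p multinode-sandbox --driver=kvm2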

                                                
                                    
TestPreload (195.01s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-580479 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0919 17:04:40.357568   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/functional-973448/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-580479 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m51.845999272s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-580479 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-580479 image pull gcr.io/k8s-minikube/busybox: (1.345879382s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-580479
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-580479: (13.095079179s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-580479 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E0919 17:07:20.372260   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
E0919 17:07:20.608633   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/ingress-addon-legacy-225902/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-580479 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m7.487064163s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-580479 image list
helpers_test.go:175: Cleaning up "test-preload-580479" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-580479
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-580479: (1.037338379s)
--- PASS: TestPreload (195.01s)
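The sequence above verifies that an image pulled into a cluster started with --preload=false survives a stop/start cycle. Condensed into a hand-run version (profile name and flags are those of this run):

    # start without the preload tarball, pull an extra image, bounce the VM
    out/minikube-linux-amd64 start -p test-preload-580479 --memory=2200 --preload=false --driver=kvm2 --kubernetes-version=v1.24.4
    out/minikube-linux-amd64 -p test-preload-580479 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-amd64 stop -p test-preload-580479
    out/minikube-linux-amd64 start -p test-preload-580479 --memory=2200 --driver=kvm2
    # the pulled image should still appear after the restart
    out/minikube-linux-amd64 -p test-preload-580479 image list | grep busybox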

                                                
                                    
TestSkaffold (139.55s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3298615749 version
skaffold_test.go:63: skaffold version: v2.7.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-182725 --memory=2600 --driver=kvm2 
E0919 17:08:43.655326   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/ingress-addon-legacy-225902/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-182725 --memory=2600 --driver=kvm2 : (49.934604758s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3298615749 run --minikube-profile skaffold-182725 --kube-context skaffold-182725 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3298615749 run --minikube-profile skaffold-182725 --kube-context skaffold-182725 --status-check=true --port-forward=false --interactive=false: (1m17.659280056s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-7ddb8cdb7f-x8spt" [b4665e02-905e-4d3f-8cf2-93fcf5d5901b] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.016822993s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7c4c797cd7-kmg8r" [b64561ac-0a1e-40f1-a1fe-b78f809900b0] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.01054083s
helpers_test.go:175: Cleaning up "skaffold-182725" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-182725
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-182725: (1.167967577s)
--- PASS: TestSkaffold (139.55s)

                                                
                                    
TestRunningBinaryUpgrade (185.25s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.3011322215.exe start -p running-upgrade-312008 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.3011322215.exe start -p running-upgrade-312008 --memory=2200 --vm-driver=kvm2 : (1m48.481177535s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-312008 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-312008 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m14.911105605s)
helpers_test.go:175: Cleaning up "running-upgrade-312008" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-312008
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-312008: (1.549301443s)
--- PASS: TestRunningBinaryUpgrade (185.25s)

                                                
                                    
TestKubernetesUpgrade (238.6s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-914715 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 
E0919 17:12:20.371723   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
E0919 17:12:20.608064   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/ingress-addon-legacy-225902/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-914715 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : (1m30.812261925s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-914715
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-914715: (3.30120985s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-914715 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-914715 status --format={{.Host}}: exit status 7 (57.609516ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-914715 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-914715 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2 : (54.367752137s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-914715 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-914715 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-914715 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 : exit status 106 (71.726643ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-914715] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17240
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17240-65689/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-65689/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-914715
	    minikube start -p kubernetes-upgrade-914715 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9147152 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.2, by running:
	    
	    minikube start -p kubernetes-upgrade-914715 --kubernetes-version=v1.28.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-914715 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-914715 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2 : (1m28.681575146s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-914715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-914715
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-914715: (1.245883313s)
--- PASS: TestKubernetesUpgrade (238.60s)
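Every "(dbg) Run" step above shells out to the minikube binary and asserts on the process exit code; the downgrade attempt, for instance, must fail with exit status 106 before the follow-up restart is tried. A minimal sketch of that pattern in Go, using only os/exec (the helper name runMinikube is illustrative, not the harness's actual API):

package main

import (
	"fmt"
	"os/exec"
)

// runMinikube mirrors a "(dbg) Run" step: invoke the binary, capture
// combined output, and surface the process exit code.
func runMinikube(args ...string) (string, int, error) {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	out, err := cmd.CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok {
		// Non-zero exit, as in "Non-zero exit: ... exit status 106" above.
		return string(out), exitErr.ExitCode(), nil
	}
	return string(out), 0, err
}

func main() {
	// The downgrade attempt from the log, expected to exit 106.
	out, code, err := runMinikube("start", "-p", "kubernetes-upgrade-914715",
		"--memory=2200", "--kubernetes-version=v1.16.0", "--driver=kvm2")
	if err != nil {
		panic(err)
	}
	fmt.Printf("exit status %d\n%s", code, out)
}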

                                                
                                    
TestPause/serial/Start (135.77s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-327365 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-327365 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (2m15.774807748s)
--- PASS: TestPause/serial/Start (135.77s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-414188 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-414188 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (63.764944ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-414188] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17240
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17240-65689/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17240-65689/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (117.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-414188 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-414188 --driver=kvm2 : (1m57.19201271s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-414188 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (117.52s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (29.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-414188 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-414188 --no-kubernetes --driver=kvm2 : (28.203385282s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-414188 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-414188 status -o json: exit status 2 (228.929811ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-414188","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-414188
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-414188: (1.007571862s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (29.44s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (38.44s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-327365 --alsologtostderr -v=1 --driver=kvm2 
E0919 17:13:17.311258   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/functional-973448/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-327365 --alsologtostderr -v=1 --driver=kvm2 : (38.413302882s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (38.44s)

                                                
                                    
TestNoKubernetes/serial/Start (30.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-414188 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-414188 --no-kubernetes --driver=kvm2 : (30.201918758s)
--- PASS: TestNoKubernetes/serial/Start (30.20s)

                                                
                                    
TestPause/serial/Pause (0.72s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-327365 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

                                                
                                    
TestPause/serial/VerifyStatus (0.28s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-327365 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-327365 --output=json --layout=cluster: exit status 2 (278.660419ms)

                                                
                                                
-- stdout --
	{"Name":"pause-327365","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-327365","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)
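The --layout=cluster JSON above reports component state as HTTP-style status codes (200 OK, 405 Stopped, 418 Paused), which is also why the status command itself exits 2 while the cluster is paused. A short sketch of decoding it; the struct below is hand-written to match only the fields visible in this log, not minikube's canonical type:

package main

import (
	"encoding/json"
	"fmt"
)

// clusterStatus mirrors just the fields visible in the status output above.
type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string               `json:"Name"`
		Components map[string]component `json:"Components"`
	} `json:"Nodes"`
}

type component struct {
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

func main() {
	// Trimmed copy of the status JSON captured in the log above.
	raw := []byte(`{"Name":"pause-327365","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-327365","Components":{"apiserver":{"StatusCode":418,"StatusName":"Paused"},"kubelet":{"StatusCode":405,"StatusName":"Stopped"}}}]}`)
	var st clusterStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	for name, c := range st.Nodes[0].Components {
		fmt.Printf("%s: %d (%s)\n", name, c.StatusCode, c.StatusName)
	}
}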

                                                
                                    
TestPause/serial/Unpause (0.75s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-327365 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.75s)

                                                
                                    
TestPause/serial/PauseAgain (0.75s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-327365 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.75s)

                                                
                                    
TestPause/serial/DeletePaused (0.99s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-327365 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.99s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (4.81s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.807389358s)
--- PASS: TestPause/serial/VerifyDeletedResources (4.81s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-414188 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-414188 "sudo systemctl is-active --quiet service kubelet": exit status 1 (199.490304ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (6.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (3.533998774s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (2.829028492s)
--- PASS: TestNoKubernetes/serial/ProfileList (6.36s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-414188
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-414188: (2.109630627s)
--- PASS: TestNoKubernetes/serial/Stop (2.11s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (92.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-414188 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-414188 --driver=kvm2 : (1m32.749291786s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (92.75s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-414188 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-414188 "sudo systemctl is-active --quiet service kubelet": exit status 1 (212.129673ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.27s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.27s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (222.18s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.3430018248.exe start -p stopped-upgrade-377369 --memory=2200 --vm-driver=kvm2 
E0919 17:15:41.231032   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/skaffold-182725/client.crt: no such file or directory
E0919 17:15:41.236418   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/skaffold-182725/client.crt: no such file or directory
E0919 17:15:41.246711   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/skaffold-182725/client.crt: no such file or directory
E0919 17:15:41.267018   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/skaffold-182725/client.crt: no such file or directory
E0919 17:15:41.307489   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/skaffold-182725/client.crt: no such file or directory
E0919 17:15:41.387888   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/skaffold-182725/client.crt: no such file or directory
E0919 17:15:41.548349   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/skaffold-182725/client.crt: no such file or directory
E0919 17:15:41.869010   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/skaffold-182725/client.crt: no such file or directory
E0919 17:15:42.509433   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/skaffold-182725/client.crt: no such file or directory
E0919 17:15:43.789821   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/skaffold-182725/client.crt: no such file or directory
E0919 17:15:46.350798   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/skaffold-182725/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.3430018248.exe start -p stopped-upgrade-377369 --memory=2200 --vm-driver=kvm2 : (2m17.611368681s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.3430018248.exe -p stopped-upgrade-377369 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.3430018248.exe -p stopped-upgrade-377369 stop: (13.081497785s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-377369 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E0919 17:18:17.311755   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/functional-973448/client.crt: no such file or directory
E0919 17:18:25.079378   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/skaffold-182725/client.crt: no such file or directory
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-377369 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m11.48977584s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (222.18s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (133.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-325204 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-325204 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (2m13.151312053s)
--- PASS: TestNetworkPlugins/group/auto/Start (133.15s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.6s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-377369
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-377369: (1.598194121s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.60s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (109.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-325204 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-325204 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m49.812226716s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (109.81s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (127.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-325204 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-325204 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (2m7.674843823s)
--- PASS: TestNetworkPlugins/group/calico/Start (127.67s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-325204 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-325204 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ft9k8" [cea96a80-177b-4bf1-8912-5592b39f83cc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0919 17:21:08.920299   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/skaffold-182725/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-ft9k8" [cea96a80-177b-4bf1-8912-5592b39f83cc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.014290187s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.55s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-nldqx" [d6f07236-d0b9-4c21-be88-88f959b48c4d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.025352701s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)
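Waits like the "waiting 10m0s for pods matching \"app=kindnet\"" step above are plain label-selector polls against the API server. A rough client-go equivalent (selector and namespace are taken from that step; the 2s poll interval is an arbitrary choice, not the harness's):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunning polls until every pod matching the selector is Running,
// roughly what the harness's "waiting ... for pods matching" step does.
func waitForRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				ready = false
			}
		}
		if ready {
			return nil
		}
		time.Sleep(2 * time.Second) // arbitrary poll interval
	}
	return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Selector and namespace from the kindnet ControllerPod step above.
	if err := waitForRunning(cs, "kube-system", "app=kindnet", 10*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("app=kindnet healthy")
}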

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (90.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-325204 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-325204 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m30.2875721s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (90.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-325204 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-325204 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vrlkf" [fe6692a3-283a-459b-976a-421e4eec81df] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0919 17:21:17.198290   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/gvisor-954748/client.crt: no such file or directory
E0919 17:21:17.203588   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/gvisor-954748/client.crt: no such file or directory
E0919 17:21:17.213874   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/gvisor-954748/client.crt: no such file or directory
E0919 17:21:17.234194   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/gvisor-954748/client.crt: no such file or directory
E0919 17:21:17.274598   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/gvisor-954748/client.crt: no such file or directory
E0919 17:21:17.355154   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/gvisor-954748/client.crt: no such file or directory
E0919 17:21:17.516151   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/gvisor-954748/client.crt: no such file or directory
E0919 17:21:17.836827   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/gvisor-954748/client.crt: no such file or directory
E0919 17:21:18.477148   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/gvisor-954748/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-vrlkf" [fe6692a3-283a-459b-976a-421e4eec81df] Running
E0919 17:21:27.439451   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/gvisor-954748/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.017190537s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.43s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-325204 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-325204 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0919 17:21:19.758003   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/gvisor-954748/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-325204 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-325204 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-325204 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-325204 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/false/Start (89.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-325204 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
E0919 17:21:37.680014   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/gvisor-954748/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-325204 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m29.796321669s)
--- PASS: TestNetworkPlugins/group/false/Start (89.80s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (110.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-325204 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E0919 17:21:58.160218   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/gvisor-954748/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-325204 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m50.281748833s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (110.28s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-dv8th" [0d38bc19-4752-4331-9e74-bd89f39162bc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.024744319s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-325204 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (13.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-325204 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6vt5v" [bb04b3a2-9513-4370-88b6-f514faec65eb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6vt5v" [bb04b3a2-9513-4370-88b6-f514faec65eb] Running
E0919 17:22:20.372268   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.021337076s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.41s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-325204 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-325204 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0919 17:22:20.608437   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/ingress-addon-legacy-225902/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-325204 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (97.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-325204 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-325204 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m37.23599733s)
--- PASS: TestNetworkPlugins/group/flannel/Start (97.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-325204 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-325204 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fjf84" [6ba1a2d1-8ea8-479d-a861-b0e53fa0dd8a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fjf84" [6ba1a2d1-8ea8-479d-a861-b0e53fa0dd8a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.017062126s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.42s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-325204 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-325204 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-325204 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-325204 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (12.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-325204 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kmvnx" [cea922ca-413e-4b3d-8614-276caa1c312e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kmvnx" [cea922ca-413e-4b3d-8614-276caa1c312e] Running
E0919 17:23:17.311197   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/functional-973448/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.014285699s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.45s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (83.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-325204 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-325204 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m23.148680418s)
--- PASS: TestNetworkPlugins/group/bridge/Start (83.15s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-325204 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-325204 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-325204 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-325204 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-325204 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9jmh8" [2358c8c5-884f-4aca-a879-22a582b1c272] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9jmh8" [2358c8c5-884f-4aca-a879-22a582b1c272] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.014644828s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.54s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (91.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-325204 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-325204 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m31.155320474s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (91.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-325204 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-325204 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-325204 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (164.77s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-367105 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-367105 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (2m44.767941971s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (164.77s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-njwqv" [f7cdeb7a-dff6-41f3-857f-7e96f45960ad] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.021246262s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-325204 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (14.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-325204 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-twwzm" [a00f6fad-be00-44aa-9113-e0b09dc491bc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-twwzm" [a00f6fad-be00-44aa-9113-e0b09dc491bc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.016921467s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.50s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-325204 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-325204 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-325204 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-t6jgm" [dfcb4153-d1ad-441b-ae73-cba6722f85a8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-t6jgm" [dfcb4153-d1ad-441b-ae73-cba6722f85a8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.025038709s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.62s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-325204 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-325204 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-325204 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-325204 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-325204 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (90.5s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-008214 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-008214 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.2: (1m30.503957051s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (90.50s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (95.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-201087 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-201087 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.2: (1m35.362073196s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (95.36s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-325204 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (11.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-325204 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6bkmt" [fac2d764-d5ba-46ac-abf1-2ef71ed584d9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6bkmt" [fac2d764-d5ba-46ac-abf1-2ef71ed584d9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.013164651s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.44s)
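
The "waiting 15m0s for pods matching app=netcat" lines come from a polling helper. A rough client-go sketch of that wait loop, assuming a kubeconfig at the default location and the k8s.io/client-go and k8s.io/apimachinery modules; the real helper lives in helpers_test.go and also inspects readiness conditions, which this sketch omits:

	// waitpods.go - sketch of polling until labeled pods report Running.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll until every pod carrying the label reports phase Running,
		// mirroring the "app=netcat healthy within ..." message above.
		err = wait.PollImmediate(2*time.Second, 15*time.Minute, func() (bool, error) {
			pods, err := client.CoreV1().Pods("default").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "app=netcat"})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient or not yet scheduled; keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
		if err != nil {
			panic(err)
		}
		fmt.Println("app=netcat healthy")
	}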

TestNetworkPlugins/group/kubenet/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-325204 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.21s)
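
The DNS check simply execs nslookup inside the netcat pod and requires kubernetes.default to resolve through the cluster DNS. A small sketch of the same probe with a naive output check added (the string assertion is an illustrative assumption, not what net_test.go does):

	// dnscheck.go - sketch of the in-pod DNS probe shown above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "kubenet-325204",
			"exec", "deployment/netcat", "--",
			"nslookup", "kubernetes.default").CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("nslookup failed: %v\n%s", err, out))
		}
		// On success nslookup echoes the resolved name and an address.
		if !strings.Contains(string(out), "kubernetes.default") {
			panic("unexpected nslookup output:\n" + string(out))
		}
		fmt.Println("cluster DNS OK")
	}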

TestNetworkPlugins/group/kubenet/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-325204 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.16s)

TestNetworkPlugins/group/kubenet/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-325204 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.15s)
E0919 17:31:32.591811   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kubenet-325204/client.crt: no such file or directory
E0919 17:31:34.986498   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/auto-325204/client.crt: no such file or directory
E0919 17:31:38.946534   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kindnet-325204/client.crt: no such file or directory
E0919 17:31:51.981757   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/old-k8s-version-367105/client.crt: no such file or directory
E0919 17:31:51.987079   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/old-k8s-version-367105/client.crt: no such file or directory
E0919 17:31:51.997377   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/old-k8s-version-367105/client.crt: no such file or directory
E0919 17:31:52.017678   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/old-k8s-version-367105/client.crt: no such file or directory
E0919 17:31:52.057985   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/old-k8s-version-367105/client.crt: no such file or directory
E0919 17:31:52.138396   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/old-k8s-version-367105/client.crt: no such file or directory
E0919 17:31:52.298806   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/old-k8s-version-367105/client.crt: no such file or directory
E0919 17:31:52.618983   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/old-k8s-version-367105/client.crt: no such file or directory
E0919 17:31:53.260150   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/old-k8s-version-367105/client.crt: no such file or directory
E0919 17:31:54.540710   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/old-k8s-version-367105/client.crt: no such file or directory
E0919 17:31:57.101067   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/old-k8s-version-367105/client.crt: no such file or directory
E0919 17:32:00.943720   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/flannel-325204/client.crt: no such file or directory
E0919 17:32:01.742868   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/calico-325204/client.crt: no such file or directory
E0919 17:32:02.221729   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/old-k8s-version-367105/client.crt: no such file or directory
E0919 17:32:03.423859   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
E0919 17:32:04.281065   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/skaffold-182725/client.crt: no such file or directory
E0919 17:32:12.462858   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/old-k8s-version-367105/client.crt: no such file or directory

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (93.1s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-210669 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.2
E0919 17:25:41.231338   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/skaffold-182725/client.crt: no such file or directory
E0919 17:26:07.302243   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/auto-325204/client.crt: no such file or directory
E0919 17:26:07.307557   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/auto-325204/client.crt: no such file or directory
E0919 17:26:07.318035   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/auto-325204/client.crt: no such file or directory
E0919 17:26:07.338409   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/auto-325204/client.crt: no such file or directory
E0919 17:26:07.378736   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/auto-325204/client.crt: no such file or directory
E0919 17:26:07.459111   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/auto-325204/client.crt: no such file or directory
E0919 17:26:07.619492   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/auto-325204/client.crt: no such file or directory
E0919 17:26:07.940170   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/auto-325204/client.crt: no such file or directory
E0919 17:26:08.580754   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/auto-325204/client.crt: no such file or directory
E0919 17:26:09.861484   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/auto-325204/client.crt: no such file or directory
E0919 17:26:11.261955   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kindnet-325204/client.crt: no such file or directory
E0919 17:26:11.267262   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kindnet-325204/client.crt: no such file or directory
E0919 17:26:11.277531   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kindnet-325204/client.crt: no such file or directory
E0919 17:26:11.297832   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kindnet-325204/client.crt: no such file or directory
E0919 17:26:11.338588   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kindnet-325204/client.crt: no such file or directory
E0919 17:26:11.418739   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kindnet-325204/client.crt: no such file or directory
E0919 17:26:11.579147   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kindnet-325204/client.crt: no such file or directory
E0919 17:26:11.899995   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kindnet-325204/client.crt: no such file or directory
E0919 17:26:12.422357   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/auto-325204/client.crt: no such file or directory
E0919 17:26:12.541117   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kindnet-325204/client.crt: no such file or directory
E0919 17:26:13.822217   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kindnet-325204/client.crt: no such file or directory
E0919 17:26:16.382481   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kindnet-325204/client.crt: no such file or directory
E0919 17:26:17.197493   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/gvisor-954748/client.crt: no such file or directory
E0919 17:26:17.543324   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/auto-325204/client.crt: no such file or directory
E0919 17:26:21.502858   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kindnet-325204/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-210669 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.2: (1m33.095915829s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (93.10s)

TestStartStop/group/no-preload/serial/DeployApp (11.57s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-008214 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [05671727-4b19-4496-8165-354a46e7bca1] Pending
helpers_test.go:344: "busybox" [05671727-4b19-4496-8165-354a46e7bca1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0919 17:26:27.784385   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/auto-325204/client.crt: no such file or directory
helpers_test.go:344: "busybox" [05671727-4b19-4496-8165-354a46e7bca1] Running
E0919 17:26:31.743709   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kindnet-325204/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.034787336s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-008214 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.57s)
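
DeployApp finishes with a sanity probe: exec `ulimit -n` inside the busybox pod to confirm the container's open-file limit is readable. A small sketch of that probe with the parsing made explicit (the integer parse is an illustrative addition):

	// ulimitcheck.go - sketch of the post-deploy `ulimit -n` probe above.
	package main

	import (
		"fmt"
		"os/exec"
		"strconv"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "no-preload-008214",
			"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
		if err != nil {
			panic(err)
		}
		limit, err := strconv.Atoi(strings.TrimSpace(string(out)))
		if err != nil {
			panic("unexpected ulimit output: " + string(out))
		}
		fmt.Printf("open-file limit in pod: %d\n", limit)
	}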

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.35s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-008214 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-008214 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.255826622s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-008214 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.35s)

TestStartStop/group/no-preload/serial/Stop (13.12s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-008214 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-008214 --alsologtostderr -v=3: (13.118069768s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.12s)

TestStartStop/group/embed-certs/serial/DeployApp (9.56s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-201087 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [aefdf2e7-be7d-46c3-8fc8-717e2297fa2b] Pending
E0919 17:26:44.890606   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/gvisor-954748/client.crt: no such file or directory
helpers_test.go:344: "busybox" [aefdf2e7-be7d-46c3-8fc8-717e2297fa2b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0919 17:26:48.264842   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/auto-325204/client.crt: no such file or directory
helpers_test.go:344: "busybox" [aefdf2e7-be7d-46c3-8fc8-717e2297fa2b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.033606496s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-201087 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.56s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-008214 -n no-preload-008214
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-008214 -n no-preload-008214: exit status 7 (70.296675ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-008214 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)
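
The "exit status 7 (may be ok)" note is expected here: minikube's status command appears to encode component state as a small bitfield in its exit code (host, kubelet, apiserver), so a cleanly stopped cluster exits 7 rather than 0. A sketch of tolerating that non-zero exit, with the bitfield reading stated as an assumption rather than taken from this log:

	// statuscheck.go - sketch of handling minikube's non-zero status exits.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "no-preload-008214")
		out, err := cmd.Output()
		code := 0
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			code = ee.ExitCode() // non-zero status still printed the state
		} else if err != nil {
			panic(err) // could not run the binary at all
		}
		fmt.Printf("host=%q exit=%d\n", out, code)
		if code == 7 {
			// Assumption: 7 = host|kubelet|apiserver bits, i.e. fully stopped.
			fmt.Println("cluster is stopped (expected after `minikube stop`)")
		}
	}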

TestStartStop/group/old-k8s-version/serial/DeployApp (9.52s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-367105 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9d284d5d-1f8d-4e81-ae0e-a092ce1f7950] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0919 17:26:52.224141   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kindnet-325204/client.crt: no such file or directory
helpers_test.go:344: "busybox" [9d284d5d-1f8d-4e81-ae0e-a092ce1f7950] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.027104827s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-367105 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.52s)

TestStartStop/group/no-preload/serial/SecondStart (329.52s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-008214 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-008214 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.2: (5m29.144016231s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-008214 -n no-preload-008214
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (329.52s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.25s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-201087 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-201087 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.15738508s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-201087 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/embed-certs/serial/Stop (13.12s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-201087 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-201087 --alsologtostderr -v=3: (13.11555043s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.12s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.99s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-367105 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0919 17:27:01.743727   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/calico-325204/client.crt: no such file or directory
E0919 17:27:01.749012   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/calico-325204/client.crt: no such file or directory
E0919 17:27:01.759311   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/calico-325204/client.crt: no such file or directory
E0919 17:27:01.779600   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/calico-325204/client.crt: no such file or directory
E0919 17:27:01.819957   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/calico-325204/client.crt: no such file or directory
E0919 17:27:01.900401   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/calico-325204/client.crt: no such file or directory
E0919 17:27:02.061320   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/calico-325204/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-367105 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.99s)

TestStartStop/group/old-k8s-version/serial/Stop (13.24s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-367105 --alsologtostderr -v=3
E0919 17:27:02.382068   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/calico-325204/client.crt: no such file or directory
E0919 17:27:03.022930   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/calico-325204/client.crt: no such file or directory
E0919 17:27:04.303696   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/calico-325204/client.crt: no such file or directory
E0919 17:27:06.863921   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/calico-325204/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-367105 --alsologtostderr -v=3: (13.244502083s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.24s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-201087 -n embed-certs-201087
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-201087 -n embed-certs-201087: exit status 7 (62.365282ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-201087 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/embed-certs/serial/SecondStart (312.01s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-201087 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.2
E0919 17:27:11.984431   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/calico-325204/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-201087 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.2: (5m11.724384746s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-201087 -n embed-certs-201087
E0919 17:32:20.371187   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (312.01s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.52s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-210669 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fbf2ad53-5e9d-4bdf-b366-921817d9413b] Pending
helpers_test.go:344: "busybox" [fbf2ad53-5e9d-4bdf-b366-921817d9413b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fbf2ad53-5e9d-4bdf-b366-921817d9413b] Running
E0919 17:27:20.371899   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/addons-528212/client.crt: no such file or directory
E0919 17:27:20.608546   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/ingress-addon-legacy-225902/client.crt: no such file or directory
E0919 17:27:22.224678   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/calico-325204/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.03380194s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-210669 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.52s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-367105 -n old-k8s-version-367105
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-367105 -n old-k8s-version-367105: exit status 7 (65.057404ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-367105 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/old-k8s-version/serial/SecondStart (83.63s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-367105 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-367105 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (1m23.381102918s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-367105 -n old-k8s-version-367105
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (83.63s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.26s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-210669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-210669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.172198928s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-210669 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.26s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-210669 --alsologtostderr -v=3
E0919 17:27:29.225114   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/auto-325204/client.crt: no such file or directory
E0919 17:27:33.184703   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kindnet-325204/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-210669 --alsologtostderr -v=3: (13.123315836s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.12s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-210669 -n default-k8s-diff-port-210669
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-210669 -n default-k8s-diff-port-210669: exit status 7 (67.990213ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-210669 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (335.67s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-210669 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.2
E0919 17:27:42.704884   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/calico-325204/client.crt: no such file or directory
E0919 17:27:42.968237   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/custom-flannel-325204/client.crt: no such file or directory
E0919 17:27:42.973498   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/custom-flannel-325204/client.crt: no such file or directory
E0919 17:27:42.983735   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/custom-flannel-325204/client.crt: no such file or directory
E0919 17:27:43.004023   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/custom-flannel-325204/client.crt: no such file or directory
E0919 17:27:43.044336   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/custom-flannel-325204/client.crt: no such file or directory
E0919 17:27:43.124659   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/custom-flannel-325204/client.crt: no such file or directory
E0919 17:27:43.285118   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/custom-flannel-325204/client.crt: no such file or directory
E0919 17:27:43.605891   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/custom-flannel-325204/client.crt: no such file or directory
E0919 17:27:44.246193   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/custom-flannel-325204/client.crt: no such file or directory
E0919 17:27:45.527226   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/custom-flannel-325204/client.crt: no such file or directory
E0919 17:27:48.087838   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/custom-flannel-325204/client.crt: no such file or directory
E0919 17:27:53.208243   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/custom-flannel-325204/client.crt: no such file or directory
E0919 17:28:03.448502   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/custom-flannel-325204/client.crt: no such file or directory
E0919 17:28:07.711879   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/false-325204/client.crt: no such file or directory
E0919 17:28:07.717168   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/false-325204/client.crt: no such file or directory
E0919 17:28:07.727459   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/false-325204/client.crt: no such file or directory
E0919 17:28:07.747730   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/false-325204/client.crt: no such file or directory
E0919 17:28:07.788025   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/false-325204/client.crt: no such file or directory
E0919 17:28:07.868344   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/false-325204/client.crt: no such file or directory
E0919 17:28:08.028770   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/false-325204/client.crt: no such file or directory
E0919 17:28:08.349653   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/false-325204/client.crt: no such file or directory
E0919 17:28:08.990695   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/false-325204/client.crt: no such file or directory
E0919 17:28:10.271567   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/false-325204/client.crt: no such file or directory
E0919 17:28:12.832650   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/false-325204/client.crt: no such file or directory
E0919 17:28:17.311254   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/functional-973448/client.crt: no such file or directory
E0919 17:28:17.953644   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/false-325204/client.crt: no such file or directory
E0919 17:28:23.665644   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/calico-325204/client.crt: no such file or directory
E0919 17:28:23.928905   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/custom-flannel-325204/client.crt: no such file or directory
E0919 17:28:28.194481   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/false-325204/client.crt: no such file or directory
E0919 17:28:36.404415   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/enable-default-cni-325204/client.crt: no such file or directory
E0919 17:28:36.409716   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/enable-default-cni-325204/client.crt: no such file or directory
E0919 17:28:36.419997   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/enable-default-cni-325204/client.crt: no such file or directory
E0919 17:28:36.440271   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/enable-default-cni-325204/client.crt: no such file or directory
E0919 17:28:36.480611   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/enable-default-cni-325204/client.crt: no such file or directory
E0919 17:28:36.560950   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/enable-default-cni-325204/client.crt: no such file or directory
E0919 17:28:36.722025   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/enable-default-cni-325204/client.crt: no such file or directory
E0919 17:28:37.042783   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/enable-default-cni-325204/client.crt: no such file or directory
E0919 17:28:37.683585   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/enable-default-cni-325204/client.crt: no such file or directory
E0919 17:28:38.964776   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/enable-default-cni-325204/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-210669 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.2: (5m35.398366352s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-210669 -n default-k8s-diff-port-210669
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (335.67s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (24.02s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0919 17:28:41.525554   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/enable-default-cni-325204/client.crt: no such file or directory
E0919 17:28:46.646149   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/enable-default-cni-325204/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-2dlmj" [572073a8-dd00-4043-a32f-1cf26ef4170d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0919 17:28:48.675337   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/false-325204/client.crt: no such file or directory
E0919 17:28:51.145356   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/auto-325204/client.crt: no such file or directory
E0919 17:28:55.105669   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kindnet-325204/client.crt: no such file or directory
E0919 17:28:56.886589   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/enable-default-cni-325204/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-2dlmj" [572073a8-dd00-4043-a32f-1cf26ef4170d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 24.015412957s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (24.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-2dlmj" [572073a8-dd00-4043-a32f-1cf26ef4170d] Running
E0919 17:29:04.889265   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/custom-flannel-325204/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.029096468s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-367105 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/old-k8s-version/serial/Pause (2.55s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-367105 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-367105 -n old-k8s-version-367105
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-367105 -n old-k8s-version-367105: exit status 2 (257.873912ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-367105 -n old-k8s-version-367105
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-367105 -n old-k8s-version-367105: exit status 2 (248.430754ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-367105 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-367105 -n old-k8s-version-367105
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-367105 -n old-k8s-version-367105
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.55s)
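
The Pause test's pattern is worth spelling out: after `minikube pause`, status is expected to print APIServer=Paused and Kubelet=Stopped while the status command itself exits non-zero (status 2), which the test tolerates. A compact sketch of the same verification, reusing the profile name from the log (illustrative, not the test's own code):

	// pausecheck.go - sketch of verifying pause via status templates.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// componentStatus runs `minikube status` with a Go-template format and
	// returns the printed value, ignoring the expected non-zero exit.
	func componentStatus(profile, format string) string {
		out, _ := exec.Command("out/minikube-linux-amd64", "status",
			"--format="+format, "-p", profile).Output()
		return strings.TrimSpace(string(out))
	}

	func main() {
		profile := "old-k8s-version-367105"
		if err := exec.Command("out/minikube-linux-amd64", "pause", "-p", profile).Run(); err != nil {
			panic(err)
		}
		api := componentStatus(profile, "{{.APIServer}}")
		kubelet := componentStatus(profile, "{{.Kubelet}}")
		fmt.Printf("APIServer=%s Kubelet=%s\n", api, kubelet)
		if api != "Paused" || kubelet != "Stopped" {
			panic("pause did not take effect")
		}
	}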

TestStartStop/group/newest-cni/serial/FirstStart (75.53s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-173799 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.2
E0919 17:29:17.100689   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/flannel-325204/client.crt: no such file or directory
E0919 17:29:17.105987   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/flannel-325204/client.crt: no such file or directory
E0919 17:29:17.116341   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/flannel-325204/client.crt: no such file or directory
E0919 17:29:17.136644   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/flannel-325204/client.crt: no such file or directory
E0919 17:29:17.177007   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/flannel-325204/client.crt: no such file or directory
E0919 17:29:17.257537   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/flannel-325204/client.crt: no such file or directory
E0919 17:29:17.367211   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/enable-default-cni-325204/client.crt: no such file or directory
E0919 17:29:17.418413   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/flannel-325204/client.crt: no such file or directory
E0919 17:29:17.738784   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/flannel-325204/client.crt: no such file or directory
E0919 17:29:18.379718   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/flannel-325204/client.crt: no such file or directory
E0919 17:29:19.660481   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/flannel-325204/client.crt: no such file or directory
E0919 17:29:22.220629   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/flannel-325204/client.crt: no such file or directory
E0919 17:29:27.341506   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/flannel-325204/client.crt: no such file or directory
E0919 17:29:29.636120   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/false-325204/client.crt: no such file or directory
E0919 17:29:37.582266   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/flannel-325204/client.crt: no such file or directory
E0919 17:29:37.625494   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/bridge-325204/client.crt: no such file or directory
E0919 17:29:37.630780   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/bridge-325204/client.crt: no such file or directory
E0919 17:29:37.641062   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/bridge-325204/client.crt: no such file or directory
E0919 17:29:37.661361   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/bridge-325204/client.crt: no such file or directory
E0919 17:29:37.701824   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/bridge-325204/client.crt: no such file or directory
E0919 17:29:37.782318   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/bridge-325204/client.crt: no such file or directory
E0919 17:29:37.942746   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/bridge-325204/client.crt: no such file or directory
E0919 17:29:38.263813   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/bridge-325204/client.crt: no such file or directory
E0919 17:29:38.904423   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/bridge-325204/client.crt: no such file or directory
E0919 17:29:40.184681   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/bridge-325204/client.crt: no such file or directory
E0919 17:29:42.745727   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/bridge-325204/client.crt: no such file or directory
E0919 17:29:45.586137   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/calico-325204/client.crt: no such file or directory
E0919 17:29:47.866819   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/bridge-325204/client.crt: no such file or directory
E0919 17:29:58.062823   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/flannel-325204/client.crt: no such file or directory
E0919 17:29:58.108071   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/bridge-325204/client.crt: no such file or directory
E0919 17:29:58.328273   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/enable-default-cni-325204/client.crt: no such file or directory
E0919 17:30:10.669413   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kubenet-325204/client.crt: no such file or directory
E0919 17:30:10.674792   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kubenet-325204/client.crt: no such file or directory
E0919 17:30:10.685112   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kubenet-325204/client.crt: no such file or directory
E0919 17:30:10.705512   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kubenet-325204/client.crt: no such file or directory
E0919 17:30:10.745821   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kubenet-325204/client.crt: no such file or directory
E0919 17:30:10.826146   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kubenet-325204/client.crt: no such file or directory
E0919 17:30:10.986596   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kubenet-325204/client.crt: no such file or directory
E0919 17:30:11.306718   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kubenet-325204/client.crt: no such file or directory
E0919 17:30:11.947598   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kubenet-325204/client.crt: no such file or directory
E0919 17:30:13.228475   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kubenet-325204/client.crt: no such file or directory
E0919 17:30:15.789346   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kubenet-325204/client.crt: no such file or directory
E0919 17:30:18.588784   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/bridge-325204/client.crt: no such file or directory
E0919 17:30:20.910205   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kubenet-325204/client.crt: no such file or directory
E0919 17:30:26.810163   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/custom-flannel-325204/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-173799 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.2: (1m15.526327214s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (75.53s)
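
Editor's note: the repeated "E0919 ... cert_rotation.go:168] key failed with : open .../client.crt: no such file or directory" lines interleaved above (and throughout this report) come from client-go's certificate-rotation worker, which keeps re-reading the client certificate for profiles (bridge-325204, kubenet-325204, ...) whose files were removed when those test clusters were deleted. A minimal sketch of the pattern, assuming a simple periodic reload loop; this is illustrative, not client-go's actual code, and the path and interval below are made up:

package main

import (
	"crypto/tls"
	"log"
	"time"
)

// watchClientCert re-reads a client cert/key pair on every tick. Once the
// profile directory is deleted, every tick fails to open the file, which
// is what produces the repeated cert_rotation.go lines in this report.
func watchClientCert(certFile, keyFile string, interval time.Duration) {
	for range time.Tick(interval) {
		if _, err := tls.LoadX509KeyPair(certFile, keyFile); err != nil {
			// Mirrors the report's "key failed with : open ...: no such file or directory"
			log.Printf("key failed with : %v", err)
			continue
		}
		// On success a real implementation would swap the reloaded
		// certificate into the active TLS config; omitted here.
	}
}

func main() {
	// Hypothetical profile path, shortened from the ones in the log.
	watchClientCert(
		"/home/jenkins/.minikube/profiles/bridge-325204/client.crt",
		"/home/jenkins/.minikube/profiles/bridge-325204/client.key",
		10*time.Second,
	)
}

These errors appear harmless here: no running test still talks to the deleted clusters, which is why the suite keeps passing around them.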

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-173799 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0919 17:30:31.150842   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kubenet-325204/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-173799 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.081496786s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/newest-cni/serial/Stop (8.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-173799 --alsologtostderr -v=3
E0919 17:30:39.023420   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/flannel-325204/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-173799 --alsologtostderr -v=3: (8.105204942s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.11s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-173799 -n newest-cni-173799
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-173799 -n newest-cni-173799: exit status 7 (57.328698ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
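
Editor's note: the "(may be ok)" marker reflects how the harness reads "minikube status" exit codes: the command exits non-zero for a stopped host, but the test only needs the host to report "Stopped" before re-enabling the addon. A minimal sketch of that tolerant check, assuming only what the log shows (a stopped host prints "Stopped" and exits 7); the helper name is mine:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostStatus runs `minikube status --format={{.Host}}` and tolerates a
// non-zero exit when the output merely says the host is stopped.
func hostStatus(profile string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.Host}}", "-p", profile, "-n", profile).CombinedOutput()
	status := strings.TrimSpace(string(out))
	if err != nil {
		if ee, ok := err.(*exec.ExitError); ok && status == "Stopped" {
			// Non-zero exit, but the host is simply stopped: not a failure.
			fmt.Printf("status error: exit status %d (may be ok)\n", ee.ExitCode())
			return status, nil
		}
		return status, err
	}
	return status, nil
}

func main() {
	st, err := hostStatus("newest-cni-173799")
	fmt.Println("host:", st, "err:", err)
}
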
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-173799 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/newest-cni/serial/SecondStart (48.01s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-173799 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.2
E0919 17:30:41.231602   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/skaffold-182725/client.crt: no such file or directory
E0919 17:30:51.556482   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/false-325204/client.crt: no such file or directory
E0919 17:30:51.631588   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kubenet-325204/client.crt: no such file or directory
E0919 17:30:59.549901   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/bridge-325204/client.crt: no such file or directory
E0919 17:31:07.301755   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/auto-325204/client.crt: no such file or directory
E0919 17:31:11.262053   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/kindnet-325204/client.crt: no such file or directory
E0919 17:31:17.198020   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/gvisor-954748/client.crt: no such file or directory
E0919 17:31:20.249413   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/enable-default-cni-325204/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-173799 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.2: (47.75566622s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-173799 -n newest-cni-173799
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (48.01s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-173799 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)
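
Editor's note: the VerifyKubernetesImages steps list images over CRI and flag anything outside the expected Kubernetes set, which is where the "Found non-minikube image" lines come from. A rough sketch of that check, assuming the standard "crictl images -o json" output shape (an "images" array with "repoTags"); the registry filter below is a simplification, not the test's real allow-list:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// imageList mirrors the fields of `crictl images -o json` that we need.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("crictl", "images", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			// Anything outside the expected registry would be reported
			// like the "Found non-minikube image" lines above.
			if !strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}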

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.36s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-173799 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-173799 -n newest-cni-173799
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-173799 -n newest-cni-173799: exit status 2 (269.893349ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-173799 -n newest-cni-173799
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-173799 -n newest-cni-173799: exit status 2 (247.565528ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-173799 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-173799 -n newest-cni-173799
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-173799 -n newest-cni-173799
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.36s)
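
Editor's note: the --format={{.Host}}, {{.APIServer}} and {{.Kubelet}} flags used throughout these Pause checks are Go text/template expressions rendered against minikube's status structure, which is why a paused cluster can report APIServer "Paused" while the kubelet reports "Stopped". A toy rendering of the same idea; the Status type here is illustrative, with only the field names taken from the flags in the log:

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for minikube's status struct; only the field
// names visible in the --format flags above are assumed.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"}
	// Equivalent of: minikube status --format={{.APIServer}}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
	tmpl.Execute(os.Stdout, st) // prints: Paused
}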

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qzfqv" [0d1bb7cc-b73a-4d5f-adbb-d000c5ff295f] Running
E0919 17:32:20.608444   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/ingress-addon-legacy-225902/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.025015574s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)
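
Editor's note: the "waiting 9m0s for pods matching ..." steps poll the cluster until a pod carrying the given label reports Running. A condensed sketch of such a wait loop using client-go, under the assumption of simple phase polling (the real helper also tracks container readiness, as the Pending/Ready transitions in the no-preload run below show); waitForLabel is my name, not the test's:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls the namespace until some pod matching the label
// selector reaches phase Running, or the timeout expires.
func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
}

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	cs, _ := kubernetes.NewForConfig(cfg)
	_ = waitForLabel(cs, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard", 9*time.Minute)
}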

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (22.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2pcws" [47ff998a-1e9c-45cf-b85b-2f0228746f3c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0919 17:32:21.470760   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/bridge-325204/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2pcws" [47ff998a-1e9c-45cf-b85b-2f0228746f3c] Running
E0919 17:32:42.968079   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/custom-flannel-325204/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 22.030890978s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (22.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qzfqv" [0d1bb7cc-b73a-4d5f-adbb-d000c5ff295f] Running
E0919 17:32:29.426801   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/calico-325204/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013016654s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-201087 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-201087 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/embed-certs/serial/Pause (2.68s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-201087 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-201087 -n embed-certs-201087
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-201087 -n embed-certs-201087: exit status 2 (255.062048ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-201087 -n embed-certs-201087
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-201087 -n embed-certs-201087: exit status 2 (253.593625ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-201087 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-201087 -n embed-certs-201087
E0919 17:32:32.943801   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/old-k8s-version-367105/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-201087 -n embed-certs-201087
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.68s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2pcws" [47ff998a-1e9c-45cf-b85b-2f0228746f3c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011899657s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-008214 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-008214 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (2.38s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-008214 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-008214 -n no-preload-008214
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-008214 -n no-preload-008214: exit status 2 (226.932374ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-008214 -n no-preload-008214
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-008214 -n no-preload-008214: exit status 2 (233.026821ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-008214 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-008214 -n no-preload-008214
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-008214 -n no-preload-008214
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.38s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dvz5w" [2ea80d74-4ae5-4197-9a54-2d2d7db5eca1] Running
E0919 17:33:13.904581   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/old-k8s-version-367105/client.crt: no such file or directory
E0919 17:33:17.311914   73397 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17240-65689/.minikube/profiles/functional-973448/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.027531698s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dvz5w" [2ea80d74-4ae5-4197-9a54-2d2d7db5eca1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011859893s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-210669 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-210669 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-210669 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-210669 -n default-k8s-diff-port-210669
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-210669 -n default-k8s-diff-port-210669: exit status 2 (236.672292ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-210669 -n default-k8s-diff-port-210669
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-210669 -n default-k8s-diff-port-210669: exit status 2 (239.85213ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-210669 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-210669 -n default-k8s-diff-port-210669
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-210669 -n default-k8s-diff-port-210669
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.36s)

Test skip (31/317)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

TestDownloadOnly/v1.28.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

TestDownloadOnly/v1.28.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.2/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (3.57s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-325204 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-325204

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-325204

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-325204

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-325204

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-325204

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-325204

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-325204

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-325204

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-325204

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-325204

>>> host: /etc/nsswitch.conf:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> host: /etc/hosts:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> host: /etc/resolv.conf:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-325204

>>> host: crictl pods:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> host: crictl containers:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> k8s: describe netcat deployment:
error: context "cilium-325204" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-325204" does not exist

>>> k8s: netcat logs:
error: context "cilium-325204" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-325204" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-325204" does not exist

>>> k8s: coredns logs:
error: context "cilium-325204" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-325204" does not exist

>>> k8s: api server logs:
error: context "cilium-325204" does not exist

>>> host: /etc/cni:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> host: ip a s:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> host: ip r s:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> host: iptables-save:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> host: iptables table nat:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-325204

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-325204

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-325204" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-325204" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-325204

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-325204

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-325204" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-325204" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-325204" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-325204" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-325204" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> host: kubelet daemon config:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> k8s: kubelet logs:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

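Editor's note: the kubectl config dump above is an empty kubeconfig (clusters, contexts, and users are all null), which is why every kubectl-based probe in this block fails with context "cilium-325204" not found. A minimal sketch in Go of guarding such probes on context existence; the helper name contextExists is hypothetical, not anything from the minikube codebase:

package main

import (
	"fmt"
	"os/exec"
)

// contextExists reports whether the named context is present in the
// kubeconfig: `kubectl config get-contexts <name>` exits non-zero when
// the context is absent.
func contextExists(name string) bool {
	return exec.Command("kubectl", "config", "get-contexts", name).Run() == nil
}

func main() {
	if !contextExists("cilium-325204") {
		fmt.Println("context missing; skipping kubectl probes")
		return
	}
	// Example probe that would otherwise fail against an empty kubeconfig.
	out, err := exec.Command("kubectl", "get", "cm", "-A").CombinedOutput()
	if err != nil {
		fmt.Println("probe failed:", err)
	}
	fmt.Print(string(out))
}
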
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-325204

>>> host: docker daemon status:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> host: docker daemon config:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> host: docker system info:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> host: cri-docker daemon status:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> host: cri-docker daemon config:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> host: cri-dockerd version:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> host: containerd daemon status:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> host: containerd daemon config:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> host: containerd config dump:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> host: crio daemon status:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> host: crio daemon config:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> host: /etc/crio:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

>>> host: crio config:
* Profile "cilium-325204" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-325204"

----------------------- debugLogs end: cilium-325204 [took: 3.392582129s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-325204" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-325204
--- SKIP: TestNetworkPlugins/group/cilium (3.57s)
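
Editor's note: every probe in the debugLogs block above fails the same way because the cilium-325204 profile was never created; the cilium variant of TestNetworkPlugins was skipped, so there is no VM, no kubeconfig context, and no guest daemon to inspect. A minimal sketch, assuming the same out/minikube-linux-amd64 binary path this suite uses, of checking that a profile is usable before collecting diagnostics:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "cilium-325204"
	// `minikube status -p <profile>` exits non-zero when the profile does
	// not exist or its VM is not running.
	out, err := exec.Command("out/minikube-linux-amd64", "status", "-p", profile).CombinedOutput()
	if err != nil {
		fmt.Printf("profile %q not usable, skipping diagnostics:\n%s", profile, out)
		return
	}
	fmt.Printf("profile %q is up:\n%s", profile, out)
}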
x
+
TestStartStop/group/disable-driver-mounts (0.15s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-021123" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-021123
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)
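
Editor's note: the skip at start_stop_delete_test.go:103 is a driver gate: the disable-driver-mounts behavior only applies to virtualbox, so on kvm2 the test skips immediately and only the 0.15s profile cleanup runs. An illustrative sketch of that pattern in Go (the environment variable and package name here are hypothetical, not the actual minikube harness):

package mytest

import (
	"os"
	"testing"
)

func TestDisableDriverMounts(t *testing.T) {
	// Hypothetical convention: the suite exports the driver under test.
	if driver := os.Getenv("TEST_DRIVER"); driver != "virtualbox" {
		t.Skipf("disable-driver-mounts only runs on virtualbox, got driver %q", driver)
	}
	// virtualbox-only assertions would follow here.
}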