Test Report: KVM_Linux 15909

c3ced9e44b664dea818a5c37f69b411b40c816d1:2023-02-24:28040

Tests failed (2/300)

Order  Failed test                                     Duration (s)
202    TestMultiNode/serial/StartAfterStop             20.8
245    TestPause/serial/SecondStartNoReconfiguration   77.64
TestMultiNode/serial/StartAfterStop (20.8s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 node start m03 --alsologtostderr
E0224 01:03:40.892558   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
multinode_test.go:252: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-858631 node start m03 --alsologtostderr: exit status 90 (18.189438331s)

-- stdout --
	* Starting worker node multinode-858631-m03 in cluster multinode-858631
	* Restarting existing kvm2 VM for "multinode-858631-m03" ...
	
	

-- /stdout --
** stderr ** 
	I0224 01:03:32.260717   24114 out.go:296] Setting OutFile to fd 1 ...
	I0224 01:03:32.260942   24114 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 01:03:32.260964   24114 out.go:309] Setting ErrFile to fd 2...
	I0224 01:03:32.260971   24114 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 01:03:32.261443   24114 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-4074/.minikube/bin
	I0224 01:03:32.261806   24114 mustload.go:65] Loading cluster: multinode-858631
	I0224 01:03:32.262145   24114 config.go:182] Loaded profile config "multinode-858631": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 01:03:32.262456   24114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:03:32.262493   24114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:03:32.277174   24114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38547
	I0224 01:03:32.277645   24114 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:03:32.278156   24114 main.go:141] libmachine: Using API Version  1
	I0224 01:03:32.278177   24114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:03:32.278511   24114 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:03:32.278690   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetState
	W0224 01:03:32.280169   24114 host.go:58] "multinode-858631-m03" host status: Stopped
	I0224 01:03:32.282511   24114 out.go:177] * Starting worker node multinode-858631-m03 in cluster multinode-858631
	I0224 01:03:32.283904   24114 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 01:03:32.283939   24114 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0224 01:03:32.283950   24114 cache.go:57] Caching tarball of preloaded images
	I0224 01:03:32.284043   24114 preload.go:174] Found /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0224 01:03:32.284058   24114 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0224 01:03:32.284190   24114 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/config.json ...
	I0224 01:03:32.284400   24114 cache.go:193] Successfully downloaded all kic artifacts
	I0224 01:03:32.284445   24114 start.go:364] acquiring machines lock for multinode-858631-m03: {Name:mk99c679472abf655c2223ea7db4ce727d2ab6ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0224 01:03:32.284513   24114 start.go:368] acquired machines lock for "multinode-858631-m03" in 36.132µs
	I0224 01:03:32.284537   24114 start.go:96] Skipping create...Using existing machine configuration
	I0224 01:03:32.284549   24114 fix.go:55] fixHost starting: m03
	I0224 01:03:32.284825   24114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:03:32.284858   24114 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:03:32.298672   24114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41887
	I0224 01:03:32.299062   24114 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:03:32.299472   24114 main.go:141] libmachine: Using API Version  1
	I0224 01:03:32.299496   24114 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:03:32.299859   24114 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:03:32.300041   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
	I0224 01:03:32.300197   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetState
	I0224 01:03:32.301607   24114 fix.go:103] recreateIfNeeded on multinode-858631-m03: state=Stopped err=<nil>
	I0224 01:03:32.301631   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
	W0224 01:03:32.301795   24114 fix.go:129] unexpected machine state, will restart: <nil>
	I0224 01:03:32.303890   24114 out.go:177] * Restarting existing kvm2 VM for "multinode-858631-m03" ...
	I0224 01:03:32.305217   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .Start
	I0224 01:03:32.305402   24114 main.go:141] libmachine: (multinode-858631-m03) Ensuring networks are active...
	I0224 01:03:32.306146   24114 main.go:141] libmachine: (multinode-858631-m03) Ensuring network default is active
	I0224 01:03:32.306514   24114 main.go:141] libmachine: (multinode-858631-m03) Ensuring network mk-multinode-858631 is active
	I0224 01:03:32.306956   24114 main.go:141] libmachine: (multinode-858631-m03) Getting domain xml...
	I0224 01:03:32.307642   24114 main.go:141] libmachine: (multinode-858631-m03) Creating domain...
	I0224 01:03:33.535122   24114 main.go:141] libmachine: (multinode-858631-m03) Waiting to get IP...
	I0224 01:03:33.536096   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:33.536497   24114 main.go:141] libmachine: (multinode-858631-m03) Found IP for machine: 192.168.39.240
	I0224 01:03:33.536523   24114 main.go:141] libmachine: (multinode-858631-m03) Reserving static IP address...
	I0224 01:03:33.536541   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has current primary IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:33.537088   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "multinode-858631-m03", mac: "52:54:00:71:f9:c5", ip: "192.168.39.240"} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:02:40 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
	I0224 01:03:33.537115   24114 main.go:141] libmachine: (multinode-858631-m03) Reserved static IP address: 192.168.39.240
	I0224 01:03:33.537132   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | skip adding static IP to network mk-multinode-858631 - found existing host DHCP lease matching {name: "multinode-858631-m03", mac: "52:54:00:71:f9:c5", ip: "192.168.39.240"}
	I0224 01:03:33.537149   24114 main.go:141] libmachine: (multinode-858631-m03) Waiting for SSH to be available...
	I0224 01:03:33.537176   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | Getting to WaitForSSH function...
	I0224 01:03:33.539150   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:33.539490   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:02:40 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
	I0224 01:03:33.539546   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:33.539598   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | Using SSH client type: external
	I0224 01:03:33.539625   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m03/id_rsa (-rw-------)
	I0224 01:03:33.539663   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0224 01:03:33.539685   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | About to run SSH command:
	I0224 01:03:33.539701   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | exit 0
	I0224 01:03:45.637148   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | SSH cmd err, output: <nil>: 
	I0224 01:03:45.637538   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetConfigRaw
	I0224 01:03:45.638256   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetIP
	I0224 01:03:45.640589   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:45.640953   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
	I0224 01:03:45.640999   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:45.641231   24114 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/config.json ...
	I0224 01:03:45.641390   24114 machine.go:88] provisioning docker machine ...
	I0224 01:03:45.641406   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
	I0224 01:03:45.641606   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetMachineName
	I0224 01:03:45.641772   24114 buildroot.go:166] provisioning hostname "multinode-858631-m03"
	I0224 01:03:45.641789   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetMachineName
	I0224 01:03:45.641914   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
	I0224 01:03:45.644042   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:45.644382   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
	I0224 01:03:45.644408   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:45.644536   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
	I0224 01:03:45.644688   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
	I0224 01:03:45.644813   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
	I0224 01:03:45.644897   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
	I0224 01:03:45.645054   24114 main.go:141] libmachine: Using SSH client type: native
	I0224 01:03:45.645540   24114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0224 01:03:45.645558   24114 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-858631-m03 && echo "multinode-858631-m03" | sudo tee /etc/hostname
	I0224 01:03:45.781731   24114 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-858631-m03
	
	I0224 01:03:45.781763   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
	I0224 01:03:45.784434   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:45.784871   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
	I0224 01:03:45.784902   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:45.785048   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
	I0224 01:03:45.785217   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
	I0224 01:03:45.785340   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
	I0224 01:03:45.785461   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
	I0224 01:03:45.785613   24114 main.go:141] libmachine: Using SSH client type: native
	I0224 01:03:45.786019   24114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0224 01:03:45.786037   24114 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-858631-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-858631-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-858631-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 01:03:45.909630   24114 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 01:03:45.909671   24114 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15909-4074/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-4074/.minikube}
	I0224 01:03:45.909708   24114 buildroot.go:174] setting up certificates
	I0224 01:03:45.909717   24114 provision.go:83] configureAuth start
	I0224 01:03:45.909726   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetMachineName
	I0224 01:03:45.909926   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetIP
	I0224 01:03:45.912640   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:45.912983   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
	I0224 01:03:45.913005   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:45.913176   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
	I0224 01:03:45.915338   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:45.915697   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
	I0224 01:03:45.915738   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:45.915804   24114 provision.go:138] copyHostCerts
	I0224 01:03:45.915872   24114 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem, removing ...
	I0224 01:03:45.915893   24114 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem
	I0224 01:03:45.915970   24114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem (1078 bytes)
	I0224 01:03:45.916080   24114 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem, removing ...
	I0224 01:03:45.916091   24114 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem
	I0224 01:03:45.916128   24114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem (1123 bytes)
	I0224 01:03:45.916214   24114 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem, removing ...
	I0224 01:03:45.916224   24114 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem
	I0224 01:03:45.916256   24114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem (1679 bytes)
	I0224 01:03:45.916320   24114 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca-key.pem org=jenkins.multinode-858631-m03 san=[192.168.39.240 192.168.39.240 localhost 127.0.0.1 minikube multinode-858631-m03]
	I0224 01:03:46.139019   24114 provision.go:172] copyRemoteCerts
	I0224 01:03:46.139085   24114 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 01:03:46.139111   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
	I0224 01:03:46.141414   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:46.141764   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
	I0224 01:03:46.141802   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:46.142019   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
	I0224 01:03:46.142249   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
	I0224 01:03:46.142398   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
	I0224 01:03:46.142564   24114 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m03/id_rsa Username:docker}
	I0224 01:03:46.230066   24114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0224 01:03:46.252894   24114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0224 01:03:46.275597   24114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0224 01:03:46.297736   24114 provision.go:86] duration metric: configureAuth took 388.009002ms
	I0224 01:03:46.297760   24114 buildroot.go:189] setting minikube options for container-runtime
	I0224 01:03:46.297955   24114 config.go:182] Loaded profile config "multinode-858631": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 01:03:46.297975   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
	I0224 01:03:46.298205   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
	I0224 01:03:46.300343   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:46.300707   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
	I0224 01:03:46.300726   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:46.300874   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
	I0224 01:03:46.300999   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
	I0224 01:03:46.301111   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
	I0224 01:03:46.301213   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
	I0224 01:03:46.301379   24114 main.go:141] libmachine: Using SSH client type: native
	I0224 01:03:46.301823   24114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0224 01:03:46.301836   24114 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0224 01:03:46.419110   24114 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0224 01:03:46.419136   24114 buildroot.go:70] root file system type: tmpfs
	I0224 01:03:46.419278   24114 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0224 01:03:46.419300   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
	I0224 01:03:46.421650   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:46.422035   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
	I0224 01:03:46.422074   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:46.422208   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
	I0224 01:03:46.422385   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
	I0224 01:03:46.422503   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
	I0224 01:03:46.422600   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
	I0224 01:03:46.422780   24114 main.go:141] libmachine: Using SSH client type: native
	I0224 01:03:46.423174   24114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0224 01:03:46.423237   24114 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0224 01:03:46.549929   24114 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0224 01:03:46.549963   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
	I0224 01:03:46.552418   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:46.552729   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
	I0224 01:03:46.552756   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:46.552886   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
	I0224 01:03:46.553084   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
	I0224 01:03:46.553255   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
	I0224 01:03:46.553414   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
	I0224 01:03:46.553596   24114 main.go:141] libmachine: Using SSH client type: native
	I0224 01:03:46.554026   24114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0224 01:03:46.554049   24114 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0224 01:03:47.346622   24114 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0224 01:03:47.346647   24114 machine.go:91] provisioned docker machine in 1.705245446s
	I0224 01:03:47.346658   24114 start.go:300] post-start starting for "multinode-858631-m03" (driver="kvm2")
	I0224 01:03:47.346666   24114 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 01:03:47.346689   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
	I0224 01:03:47.346962   24114 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 01:03:47.346988   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
	I0224 01:03:47.349581   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:47.349992   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
	I0224 01:03:47.350017   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:47.350172   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
	I0224 01:03:47.350362   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
	I0224 01:03:47.350549   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
	I0224 01:03:47.350700   24114 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m03/id_rsa Username:docker}
	I0224 01:03:47.439110   24114 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 01:03:47.443190   24114 info.go:137] Remote host: Buildroot 2021.02.12
	I0224 01:03:47.443208   24114 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-4074/.minikube/addons for local assets ...
	I0224 01:03:47.443272   24114 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-4074/.minikube/files for local assets ...
	I0224 01:03:47.443345   24114 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem -> 111312.pem in /etc/ssl/certs
	I0224 01:03:47.443425   24114 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 01:03:47.451605   24114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem --> /etc/ssl/certs/111312.pem (1708 bytes)
	I0224 01:03:47.477354   24114 start.go:303] post-start completed in 130.684659ms
	I0224 01:03:47.477374   24114 fix.go:57] fixHost completed within 15.192824116s
	I0224 01:03:47.477397   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
	I0224 01:03:47.480041   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:47.480408   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
	I0224 01:03:47.480432   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:47.480585   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
	I0224 01:03:47.480775   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
	I0224 01:03:47.480910   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
	I0224 01:03:47.481050   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
	I0224 01:03:47.481200   24114 main.go:141] libmachine: Using SSH client type: native
	I0224 01:03:47.481636   24114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0224 01:03:47.481650   24114 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0224 01:03:47.598085   24114 main.go:141] libmachine: SSH cmd err, output: <nil>: 1677200627.547433994
	
	I0224 01:03:47.598107   24114 fix.go:207] guest clock: 1677200627.547433994
	I0224 01:03:47.598117   24114 fix.go:220] Guest: 2023-02-24 01:03:47.547433994 +0000 UTC Remote: 2023-02-24 01:03:47.477378328 +0000 UTC m=+15.254967977 (delta=70.055666ms)
	I0224 01:03:47.598162   24114 fix.go:191] guest clock delta is within tolerance: 70.055666ms
	I0224 01:03:47.598169   24114 start.go:83] releasing machines lock for "multinode-858631-m03", held for 15.313644124s
	I0224 01:03:47.598196   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
	I0224 01:03:47.598466   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetIP
	I0224 01:03:47.601108   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:47.601449   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
	I0224 01:03:47.601500   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:47.601635   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
	I0224 01:03:47.602113   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
	I0224 01:03:47.602297   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
	I0224 01:03:47.602410   24114 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0224 01:03:47.602454   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
	I0224 01:03:47.602579   24114 ssh_runner.go:195] Run: systemctl --version
	I0224 01:03:47.602608   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
	I0224 01:03:47.604986   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:47.605313   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
	I0224 01:03:47.605341   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:47.605437   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:47.605620   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
	I0224 01:03:47.605810   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
	I0224 01:03:47.605871   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
	I0224 01:03:47.605901   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
	I0224 01:03:47.605962   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
	I0224 01:03:47.606130   24114 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m03/id_rsa Username:docker}
	I0224 01:03:47.606155   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
	I0224 01:03:47.606285   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
	I0224 01:03:47.606403   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
	I0224 01:03:47.606501   24114 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m03/id_rsa Username:docker}
	I0224 01:03:47.698434   24114 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0224 01:03:47.720616   24114 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0224 01:03:47.720682   24114 ssh_runner.go:195] Run: which cri-dockerd
	I0224 01:03:47.724288   24114 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0224 01:03:47.733533   24114 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0224 01:03:47.749554   24114 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0224 01:03:47.765014   24114 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0224 01:03:47.765035   24114 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 01:03:47.765118   24114 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 01:03:47.791918   24114 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0224 01:03:47.791947   24114 docker.go:560] Images already preloaded, skipping extraction
	I0224 01:03:47.791955   24114 start.go:485] detecting cgroup driver to use...
	I0224 01:03:47.792040   24114 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 01:03:47.809788   24114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0224 01:03:47.819164   24114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0224 01:03:47.829096   24114 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0224 01:03:47.829135   24114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0224 01:03:47.839183   24114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 01:03:47.849277   24114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0224 01:03:47.859033   24114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 01:03:47.869193   24114 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 01:03:47.879734   24114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0224 01:03:47.890162   24114 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 01:03:47.899715   24114 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 01:03:47.908689   24114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 01:03:48.018034   24114 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0224 01:03:48.035589   24114 start.go:485] detecting cgroup driver to use...
	I0224 01:03:48.035666   24114 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0224 01:03:48.052927   24114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0224 01:03:48.073823   24114 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0224 01:03:48.093543   24114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0224 01:03:48.106104   24114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0224 01:03:48.118539   24114 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0224 01:03:48.148341   24114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0224 01:03:48.160647   24114 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 01:03:48.180346   24114 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0224 01:03:48.302273   24114 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0224 01:03:48.409590   24114 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0224 01:03:48.409616   24114 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0224 01:03:48.426383   24114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 01:03:48.530953   24114 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0224 01:03:49.936371   24114 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.405376904s)
	I0224 01:03:49.936434   24114 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 01:03:50.053226   24114 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0224 01:03:50.173114   24114 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 01:03:50.268183   24114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 01:03:50.380907   24114 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0224 01:03:50.398249   24114 out.go:177] 
	W0224 01:03:50.399656   24114 out.go:239] X Exiting due to RUNTIME_ENABLE: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W0224 01:03:50.399672   24114 out.go:239] * 
	W0224 01:03:50.402548   24114 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0224 01:03:50.403866   24114 out.go:177] 

** /stderr **
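Analysis: the node restart fails at `sudo systemctl restart cri-docker.socket`, which minikube surfaces as RUNTIME_ENABLE (consistent with the exit status 90 reported above); the captured log only says `Job failed. See "journalctl -xe" for details.` A minimal triage sketch against the affected worker node follows. Treat it as a starting point under stated assumptions, not the established debugging procedure for this failure: the `-n`/`--node` flag of `minikube ssh` targeting m03 is assumed here, and the rest is plain systemctl/journalctl inspection of the units named in the log.

    # Check the state of the cri-docker units on worker node m03
    # (the -n/--node flag is an assumption; unit names come from the log above).
    out/minikube-linux-amd64 ssh -p multinode-858631 -n m03 -- \
      "sudo systemctl status cri-docker.socket cri-docker.service --no-pager"

    # Pull the journal entries the error message points at.
    out/minikube-linux-amd64 ssh -p multinode-858631 -n m03 -- \
      "sudo journalctl -x --no-pager -u cri-docker.socket -u cri-docker.service | tail -n 50"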
I0224 01:03:45.641390   24114 machine.go:88] provisioning docker machine ...
I0224 01:03:45.641406   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
I0224 01:03:45.641606   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetMachineName
I0224 01:03:45.641772   24114 buildroot.go:166] provisioning hostname "multinode-858631-m03"
I0224 01:03:45.641789   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetMachineName
I0224 01:03:45.641914   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:45.644042   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:45.644382   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:45.644408   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:45.644536   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:45.644688   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:45.644813   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:45.644897   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:45.645054   24114 main.go:141] libmachine: Using SSH client type: native
I0224 01:03:45.645540   24114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
I0224 01:03:45.645558   24114 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-858631-m03 && echo "multinode-858631-m03" | sudo tee /etc/hostname
I0224 01:03:45.781731   24114 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-858631-m03

I0224 01:03:45.781763   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:45.784434   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:45.784871   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:45.784902   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:45.785048   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:45.785217   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:45.785340   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:45.785461   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:45.785613   24114 main.go:141] libmachine: Using SSH client type: native
I0224 01:03:45.786019   24114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
I0224 01:03:45.786037   24114 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\smultinode-858631-m03' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-858631-m03/g' /etc/hosts;
			else 
				echo '127.0.1.1 multinode-858631-m03' | sudo tee -a /etc/hosts; 
			fi
		fi
I0224 01:03:45.909630   24114 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0224 01:03:45.909671   24114 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15909-4074/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-4074/.minikube}
I0224 01:03:45.909708   24114 buildroot.go:174] setting up certificates
I0224 01:03:45.909717   24114 provision.go:83] configureAuth start
I0224 01:03:45.909726   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetMachineName
I0224 01:03:45.909926   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetIP
I0224 01:03:45.912640   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:45.912983   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:45.913005   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:45.913176   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:45.915338   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:45.915697   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:45.915738   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:45.915804   24114 provision.go:138] copyHostCerts
I0224 01:03:45.915872   24114 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem, removing ...
I0224 01:03:45.915893   24114 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem
I0224 01:03:45.915970   24114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem (1078 bytes)
I0224 01:03:45.916080   24114 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem, removing ...
I0224 01:03:45.916091   24114 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem
I0224 01:03:45.916128   24114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem (1123 bytes)
I0224 01:03:45.916214   24114 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem, removing ...
I0224 01:03:45.916224   24114 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem
I0224 01:03:45.916256   24114 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem (1679 bytes)
I0224 01:03:45.916320   24114 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca-key.pem org=jenkins.multinode-858631-m03 san=[192.168.39.240 192.168.39.240 localhost 127.0.0.1 minikube multinode-858631-m03]
I0224 01:03:46.139019   24114 provision.go:172] copyRemoteCerts
I0224 01:03:46.139085   24114 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0224 01:03:46.139111   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:46.141414   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:46.141764   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:46.141802   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:46.142019   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:46.142249   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:46.142398   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:46.142564   24114 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m03/id_rsa Username:docker}
I0224 01:03:46.230066   24114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
I0224 01:03:46.252894   24114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0224 01:03:46.275597   24114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0224 01:03:46.297736   24114 provision.go:86] duration metric: configureAuth took 388.009002ms
I0224 01:03:46.297760   24114 buildroot.go:189] setting minikube options for container-runtime
I0224 01:03:46.297955   24114 config.go:182] Loaded profile config "multinode-858631": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
I0224 01:03:46.297975   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
I0224 01:03:46.298205   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:46.300343   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:46.300707   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:46.300726   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:46.300874   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:46.300999   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:46.301111   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:46.301213   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:46.301379   24114 main.go:141] libmachine: Using SSH client type: native
I0224 01:03:46.301823   24114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
I0224 01:03:46.301836   24114 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0224 01:03:46.419110   24114 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs

I0224 01:03:46.419136   24114 buildroot.go:70] root file system type: tmpfs
I0224 01:03:46.419278   24114 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0224 01:03:46.419300   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:46.421650   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:46.422035   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:46.422074   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:46.422208   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:46.422385   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:46.422503   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:46.422600   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:46.422780   24114 main.go:141] libmachine: Using SSH client type: native
I0224 01:03:46.423174   24114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
I0224 01:03:46.423237   24114 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0224 01:03:46.549929   24114 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0224 01:03:46.549963   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:46.552418   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:46.552729   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:46.552756   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:46.552886   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:46.553084   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:46.553255   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:46.553414   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:46.553596   24114 main.go:141] libmachine: Using SSH client type: native
I0224 01:03:46.554026   24114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
I0224 01:03:46.554049   24114 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0224 01:03:47.346622   24114 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.

I0224 01:03:47.346647   24114 machine.go:91] provisioned docker machine in 1.705245446s
I0224 01:03:47.346658   24114 start.go:300] post-start starting for "multinode-858631-m03" (driver="kvm2")
I0224 01:03:47.346666   24114 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0224 01:03:47.346689   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
I0224 01:03:47.346962   24114 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0224 01:03:47.346988   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:47.349581   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.349992   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:47.350017   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.350172   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:47.350362   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:47.350549   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:47.350700   24114 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m03/id_rsa Username:docker}
I0224 01:03:47.439110   24114 ssh_runner.go:195] Run: cat /etc/os-release
I0224 01:03:47.443190   24114 info.go:137] Remote host: Buildroot 2021.02.12
I0224 01:03:47.443208   24114 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-4074/.minikube/addons for local assets ...
I0224 01:03:47.443272   24114 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-4074/.minikube/files for local assets ...
I0224 01:03:47.443345   24114 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem -> 111312.pem in /etc/ssl/certs
I0224 01:03:47.443425   24114 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0224 01:03:47.451605   24114 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem --> /etc/ssl/certs/111312.pem (1708 bytes)
I0224 01:03:47.477354   24114 start.go:303] post-start completed in 130.684659ms
I0224 01:03:47.477374   24114 fix.go:57] fixHost completed within 15.192824116s
I0224 01:03:47.477397   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:47.480041   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.480408   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:47.480432   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.480585   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:47.480775   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:47.480910   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:47.481050   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:47.481200   24114 main.go:141] libmachine: Using SSH client type: native
I0224 01:03:47.481636   24114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
I0224 01:03:47.481650   24114 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0224 01:03:47.598085   24114 main.go:141] libmachine: SSH cmd err, output: <nil>: 1677200627.547433994

I0224 01:03:47.598107   24114 fix.go:207] guest clock: 1677200627.547433994
I0224 01:03:47.598117   24114 fix.go:220] Guest: 2023-02-24 01:03:47.547433994 +0000 UTC Remote: 2023-02-24 01:03:47.477378328 +0000 UTC m=+15.254967977 (delta=70.055666ms)
I0224 01:03:47.598162   24114 fix.go:191] guest clock delta is within tolerance: 70.055666ms
I0224 01:03:47.598169   24114 start.go:83] releasing machines lock for "multinode-858631-m03", held for 15.313644124s
I0224 01:03:47.598196   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
I0224 01:03:47.598466   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetIP
I0224 01:03:47.601108   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.601449   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:47.601500   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.601635   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
I0224 01:03:47.602113   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
I0224 01:03:47.602297   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .DriverName
I0224 01:03:47.602410   24114 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0224 01:03:47.602454   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:47.602579   24114 ssh_runner.go:195] Run: systemctl --version
I0224 01:03:47.602608   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHHostname
I0224 01:03:47.604986   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.605313   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:47.605341   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.605437   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.605620   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:47.605810   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:47.605871   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:f9:c5", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:03:44 +0000 UTC Type:0 Mac:52:54:00:71:f9:c5 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-858631-m03 Clientid:01:52:54:00:71:f9:c5}
I0224 01:03:47.605901   24114 main.go:141] libmachine: (multinode-858631-m03) DBG | domain multinode-858631-m03 has defined IP address 192.168.39.240 and MAC address 52:54:00:71:f9:c5 in network mk-multinode-858631
I0224 01:03:47.605962   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:47.606130   24114 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m03/id_rsa Username:docker}
I0224 01:03:47.606155   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHPort
I0224 01:03:47.606285   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHKeyPath
I0224 01:03:47.606403   24114 main.go:141] libmachine: (multinode-858631-m03) Calling .GetSSHUsername
I0224 01:03:47.606501   24114 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m03/id_rsa Username:docker}
I0224 01:03:47.698434   24114 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0224 01:03:47.720616   24114 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0224 01:03:47.720682   24114 ssh_runner.go:195] Run: which cri-dockerd
I0224 01:03:47.724288   24114 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0224 01:03:47.733533   24114 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0224 01:03:47.749554   24114 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0224 01:03:47.765014   24114 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0224 01:03:47.765035   24114 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
I0224 01:03:47.765118   24114 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0224 01:03:47.791918   24114 docker.go:630] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
kindest/kindnetd:v20221004-44d545d1
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout --
I0224 01:03:47.791947   24114 docker.go:560] Images already preloaded, skipping extraction
I0224 01:03:47.791955   24114 start.go:485] detecting cgroup driver to use...
I0224 01:03:47.792040   24114 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0224 01:03:47.809788   24114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0224 01:03:47.819164   24114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0224 01:03:47.829096   24114 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0224 01:03:47.829135   24114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0224 01:03:47.839183   24114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0224 01:03:47.849277   24114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0224 01:03:47.859033   24114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0224 01:03:47.869193   24114 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0224 01:03:47.879734   24114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0224 01:03:47.890162   24114 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0224 01:03:47.899715   24114 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0224 01:03:47.908689   24114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0224 01:03:48.018034   24114 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0224 01:03:48.035589   24114 start.go:485] detecting cgroup driver to use...
I0224 01:03:48.035666   24114 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0224 01:03:48.052927   24114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0224 01:03:48.073823   24114 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0224 01:03:48.093543   24114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0224 01:03:48.106104   24114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0224 01:03:48.118539   24114 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0224 01:03:48.148341   24114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0224 01:03:48.160647   24114 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0224 01:03:48.180346   24114 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0224 01:03:48.302273   24114 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0224 01:03:48.409590   24114 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0224 01:03:48.409616   24114 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0224 01:03:48.426383   24114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0224 01:03:48.530953   24114 ssh_runner.go:195] Run: sudo systemctl restart docker
I0224 01:03:49.936371   24114 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.405376904s)
I0224 01:03:49.936434   24114 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0224 01:03:50.053226   24114 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0224 01:03:50.173114   24114 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0224 01:03:50.268183   24114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0224 01:03:50.380907   24114 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0224 01:03:50.398249   24114 out.go:177] 
W0224 01:03:50.399656   24114 out.go:239] X Exiting due to RUNTIME_ENABLE: sudo systemctl restart cri-docker.socket: Process exited with status 1
stdout:

stderr:
Job failed. See "journalctl -xe" for details.

W0224 01:03:50.399672   24114 out.go:239] * 
W0224 01:03:50.402548   24114 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0224 01:03:50.403866   24114 out.go:177] 
multinode_test.go:255: node start returned an error. args "out/minikube-linux-amd64 -p multinode-858631 node start m03 --alsologtostderr": exit status 90
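The proximate failure above is "sudo systemctl restart cri-docker.socket" exiting with status 1 on the freshly restarted m03 VM; the captured stderr only says to check "journalctl -xe". A minimal triage sketch follows, assuming only the profile/node names and unit names that appear in this log (the commands themselves are standard minikube and systemd tooling, not part of the test output):

	# SSH into the affected worker node (minikube ssh takes --node/-n).
	minikube ssh -p multinode-858631 -n m03

	# Inside the guest: inspect the socket unit that failed to restart,
	# its paired service, and their journal entries.
	sudo systemctl status cri-docker.socket cri-docker.service
	sudo journalctl -xeu cri-docker.socket

	# The socket binds /var/run/cri-dockerd.sock (see the crictl.yaml write
	# earlier in this log); a stale socket file surviving the VM restart is
	# one plausible cause worth ruling out.
	ls -l /var/run/cri-dockerd.sock

This is only a starting point; the fuller picture is in the post-mortem logs collected below.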
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 status
multinode_test.go:259: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-858631 status: exit status 2 (552.605592ms)

-- stdout --
	multinode-858631
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-858631-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-858631-m03
	type: Worker
	host: Running
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-858631 status" : exit status 2
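For scripting checks like the one this test performs, the same status information is available in machine-readable form; a short sketch, assuming minikube's documented --output json flag and a local jq install:

	# The exit code stays non-zero (here 2) whenever a component is down,
	# independent of the output format, so capture it separately.
	minikube status -p multinode-858631 --output json | jq .

	# Each node object carries Name/Host/Kubelet/APIServer fields; m03
	# would report Host "Running" with Kubelet "Stopped", matching the
	# plain-text status above.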
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-858631 -n multinode-858631
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-858631 logs -n 25: (1.137769573s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-858631 cp multinode-858631:/home/docker/cp-test.txt                           | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
	|         | multinode-858631-m03:/home/docker/cp-test_multinode-858631_multinode-858631-m03.txt     |                  |         |         |                     |                     |
	| ssh     | multinode-858631 ssh -n                                                                 | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
	|         | multinode-858631 sudo cat                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-858631 ssh -n multinode-858631-m03 sudo cat                                   | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
	|         | /home/docker/cp-test_multinode-858631_multinode-858631-m03.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-858631 cp testdata/cp-test.txt                                                | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
	|         | multinode-858631-m02:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-858631 ssh -n                                                                 | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
	|         | multinode-858631-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-858631 cp multinode-858631-m02:/home/docker/cp-test.txt                       | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3133866316/001/cp-test_multinode-858631-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-858631 ssh -n                                                                 | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
	|         | multinode-858631-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-858631 cp multinode-858631-m02:/home/docker/cp-test.txt                       | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
	|         | multinode-858631:/home/docker/cp-test_multinode-858631-m02_multinode-858631.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-858631 ssh -n                                                                 | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
	|         | multinode-858631-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-858631 ssh -n multinode-858631 sudo cat                                       | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
	|         | /home/docker/cp-test_multinode-858631-m02_multinode-858631.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-858631 cp multinode-858631-m02:/home/docker/cp-test.txt                       | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
	|         | multinode-858631-m03:/home/docker/cp-test_multinode-858631-m02_multinode-858631-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-858631 ssh -n                                                                 | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
	|         | multinode-858631-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-858631 ssh -n multinode-858631-m03 sudo cat                                   | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
	|         | /home/docker/cp-test_multinode-858631-m02_multinode-858631-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-858631 cp testdata/cp-test.txt                                                | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
	|         | multinode-858631-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-858631 ssh -n                                                                 | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
	|         | multinode-858631-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-858631 cp multinode-858631-m03:/home/docker/cp-test.txt                       | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3133866316/001/cp-test_multinode-858631-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-858631 ssh -n                                                                 | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
	|         | multinode-858631-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-858631 cp multinode-858631-m03:/home/docker/cp-test.txt                       | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
	|         | multinode-858631:/home/docker/cp-test_multinode-858631-m03_multinode-858631.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-858631 ssh -n                                                                 | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
	|         | multinode-858631-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-858631 ssh -n multinode-858631 sudo cat                                       | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
	|         | /home/docker/cp-test_multinode-858631-m03_multinode-858631.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-858631 cp multinode-858631-m03:/home/docker/cp-test.txt                       | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
	|         | multinode-858631-m02:/home/docker/cp-test_multinode-858631-m03_multinode-858631-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-858631 ssh -n                                                                 | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
	|         | multinode-858631-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-858631 ssh -n multinode-858631-m02 sudo cat                                   | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
	|         | /home/docker/cp-test_multinode-858631-m03_multinode-858631-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-858631 node stop m03                                                          | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC | 24 Feb 23 01:03 UTC |
	| node    | multinode-858631 node start                                                             | multinode-858631 | jenkins | v1.29.0 | 24 Feb 23 01:03 UTC |                     |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/24 01:00:07
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.20.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 01:00:07.922860   21922 out.go:296] Setting OutFile to fd 1 ...
	I0224 01:00:07.923056   21922 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 01:00:07.923066   21922 out.go:309] Setting ErrFile to fd 2...
	I0224 01:00:07.923073   21922 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 01:00:07.923190   21922 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-4074/.minikube/bin
	I0224 01:00:07.923759   21922 out.go:303] Setting JSON to false
	I0224 01:00:07.924632   21922 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2557,"bootTime":1677197851,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 01:00:07.924691   21922 start.go:135] virtualization: kvm guest
	I0224 01:00:07.927314   21922 out.go:177] * [multinode-858631] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0224 01:00:07.929106   21922 out.go:177]   - MINIKUBE_LOCATION=15909
	I0224 01:00:07.929051   21922 notify.go:220] Checking for updates...
	I0224 01:00:07.930542   21922 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 01:00:07.932177   21922 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15909-4074/kubeconfig
	I0224 01:00:07.933715   21922 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-4074/.minikube
	I0224 01:00:07.935104   21922 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0224 01:00:07.936519   21922 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 01:00:07.937943   21922 driver.go:365] Setting default libvirt URI to qemu:///system
	I0224 01:00:07.972305   21922 out.go:177] * Using the kvm2 driver based on user configuration
	I0224 01:00:07.973594   21922 start.go:296] selected driver: kvm2
	I0224 01:00:07.973608   21922 start.go:857] validating driver "kvm2" against <nil>
	I0224 01:00:07.973618   21922 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 01:00:07.974205   21922 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 01:00:07.974270   21922 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15909-4074/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0224 01:00:07.988124   21922 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.29.0
	I0224 01:00:07.988170   21922 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0224 01:00:07.988380   21922 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0224 01:00:07.988411   21922 cni.go:84] Creating CNI manager for ""
	I0224 01:00:07.988423   21922 cni.go:136] 0 nodes found, recommending kindnet
	I0224 01:00:07.988433   21922 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0224 01:00:07.988452   21922 start_flags.go:319] config:
	{Name:multinode-858631 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-858631 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 01:00:07.988547   21922 iso.go:125] acquiring lock: {Name:mkc3d6185dc03bdb5dc9fb9cd39dd085e0eef640 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 01:00:07.990401   21922 out.go:177] * Starting control plane node multinode-858631 in cluster multinode-858631
	I0224 01:00:07.991675   21922 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 01:00:07.991700   21922 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0224 01:00:07.991715   21922 cache.go:57] Caching tarball of preloaded images
	I0224 01:00:07.991784   21922 preload.go:174] Found /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0224 01:00:07.991794   21922 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0224 01:00:07.992091   21922 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/config.json ...
	I0224 01:00:07.992110   21922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/config.json: {Name:mkc2f0838e41fb815d83b476363d0d2dba762f08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
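For reference, the config dumped in the lines above is the same data that start.go persists to the config.json noted here. A quick way to inspect it from the host (a sketch only; assumes jq is installed, with the field names taken from the logged struct):

	jq '.KubernetesConfig.KubernetesVersion' \
	  /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/config.json
	# expected, per the log above: "v1.26.1"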
	I0224 01:00:07.992221   21922 cache.go:193] Successfully downloaded all kic artifacts
	I0224 01:00:07.992241   21922 start.go:364] acquiring machines lock for multinode-858631: {Name:mk99c679472abf655c2223ea7db4ce727d2ab6ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0224 01:00:07.992265   21922 start.go:368] acquired machines lock for "multinode-858631" in 14.484µs
	I0224 01:00:07.992282   21922 start.go:93] Provisioning new machine with config: &{Name:multinode-858631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-858631 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0224 01:00:07.992341   21922 start.go:125] createHost starting for "" (driver="kvm2")
	I0224 01:00:07.994145   21922 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0224 01:00:07.994259   21922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:00:07.994296   21922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:00:08.007604   21922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38801
	I0224 01:00:08.008010   21922 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:00:08.009823   21922 main.go:141] libmachine: Using API Version  1
	I0224 01:00:08.009850   21922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:00:08.010160   21922 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:00:08.010346   21922 main.go:141] libmachine: (multinode-858631) Calling .GetMachineName
	I0224 01:00:08.010472   21922 main.go:141] libmachine: (multinode-858631) Calling .DriverName
	I0224 01:00:08.010601   21922 start.go:159] libmachine.API.Create for "multinode-858631" (driver="kvm2")
	I0224 01:00:08.010626   21922 client.go:168] LocalClient.Create starting
	I0224 01:00:08.010656   21922 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem
	I0224 01:00:08.010682   21922 main.go:141] libmachine: Decoding PEM data...
	I0224 01:00:08.010697   21922 main.go:141] libmachine: Parsing certificate...
	I0224 01:00:08.010742   21922 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem
	I0224 01:00:08.010761   21922 main.go:141] libmachine: Decoding PEM data...
	I0224 01:00:08.010777   21922 main.go:141] libmachine: Parsing certificate...
	I0224 01:00:08.010794   21922 main.go:141] libmachine: Running pre-create checks...
	I0224 01:00:08.010803   21922 main.go:141] libmachine: (multinode-858631) Calling .PreCreateCheck
	I0224 01:00:08.011056   21922 main.go:141] libmachine: (multinode-858631) Calling .GetConfigRaw
	I0224 01:00:08.011423   21922 main.go:141] libmachine: Creating machine...
	I0224 01:00:08.011437   21922 main.go:141] libmachine: (multinode-858631) Calling .Create
	I0224 01:00:08.011546   21922 main.go:141] libmachine: (multinode-858631) Creating KVM machine...
	I0224 01:00:08.012611   21922 main.go:141] libmachine: (multinode-858631) DBG | found existing default KVM network
	I0224 01:00:08.013199   21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:08.013088   21944 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000029240}
	I0224 01:00:08.018125   21922 main.go:141] libmachine: (multinode-858631) DBG | trying to create private KVM network mk-multinode-858631 192.168.39.0/24...
	I0224 01:00:08.082681   21922 main.go:141] libmachine: (multinode-858631) DBG | private KVM network mk-multinode-858631 192.168.39.0/24 created
	I0224 01:00:08.082717   21922 main.go:141] libmachine: (multinode-858631) Setting up store path in /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631 ...
	I0224 01:00:08.082733   21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:08.082648   21944 common.go:116] Making disk image using store path: /home/jenkins/minikube-integration/15909-4074/.minikube
	I0224 01:00:08.082754   21922 main.go:141] libmachine: (multinode-858631) Building disk image from file:///home/jenkins/minikube-integration/15909-4074/.minikube/cache/iso/amd64/minikube-v1.29.0-1676568791-15849-amd64.iso
	I0224 01:00:08.082824   21922 main.go:141] libmachine: (multinode-858631) Downloading /home/jenkins/minikube-integration/15909-4074/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/15909-4074/.minikube/cache/iso/amd64/minikube-v1.29.0-1676568791-15849-amd64.iso...
	I0224 01:00:08.280134   21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:08.280031   21944 common.go:123] Creating ssh key: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631/id_rsa...
	I0224 01:00:08.321431   21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:08.321350   21944 common.go:129] Creating raw disk image: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631/multinode-858631.rawdisk...
	I0224 01:00:08.321458   21922 main.go:141] libmachine: (multinode-858631) DBG | Writing magic tar header
	I0224 01:00:08.321486   21922 main.go:141] libmachine: (multinode-858631) DBG | Writing SSH key tar header
	I0224 01:00:08.321556   21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:08.321463   21944 common.go:143] Fixing permissions on /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631 ...
	I0224 01:00:08.321580   21922 main.go:141] libmachine: (multinode-858631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631
	I0224 01:00:08.321600   21922 main.go:141] libmachine: (multinode-858631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15909-4074/.minikube/machines
	I0224 01:00:08.321630   21922 main.go:141] libmachine: (multinode-858631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15909-4074/.minikube
	I0224 01:00:08.321646   21922 main.go:141] libmachine: (multinode-858631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15909-4074
	I0224 01:00:08.321663   21922 main.go:141] libmachine: (multinode-858631) Setting executable bit set on /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631 (perms=drwx------)
	I0224 01:00:08.321679   21922 main.go:141] libmachine: (multinode-858631) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0224 01:00:08.321695   21922 main.go:141] libmachine: (multinode-858631) Setting executable bit set on /home/jenkins/minikube-integration/15909-4074/.minikube/machines (perms=drwxrwxr-x)
	I0224 01:00:08.321709   21922 main.go:141] libmachine: (multinode-858631) Setting executable bit set on /home/jenkins/minikube-integration/15909-4074/.minikube (perms=drwxr-xr-x)
	I0224 01:00:08.321724   21922 main.go:141] libmachine: (multinode-858631) Setting executable bit set on /home/jenkins/minikube-integration/15909-4074 (perms=drwxrwxr-x)
	I0224 01:00:08.321740   21922 main.go:141] libmachine: (multinode-858631) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0224 01:00:08.321755   21922 main.go:141] libmachine: (multinode-858631) DBG | Checking permissions on dir: /home/jenkins
	I0224 01:00:08.321772   21922 main.go:141] libmachine: (multinode-858631) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0224 01:00:08.321782   21922 main.go:141] libmachine: (multinode-858631) DBG | Checking permissions on dir: /home
	I0224 01:00:08.321796   21922 main.go:141] libmachine: (multinode-858631) DBG | Skipping /home - not owner
	I0224 01:00:08.321810   21922 main.go:141] libmachine: (multinode-858631) Creating domain...
	I0224 01:00:08.322759   21922 main.go:141] libmachine: (multinode-858631) define libvirt domain using xml: 
	I0224 01:00:08.322783   21922 main.go:141] libmachine: (multinode-858631) <domain type='kvm'>
	I0224 01:00:08.322792   21922 main.go:141] libmachine: (multinode-858631)   <name>multinode-858631</name>
	I0224 01:00:08.322801   21922 main.go:141] libmachine: (multinode-858631)   <memory unit='MiB'>2200</memory>
	I0224 01:00:08.322810   21922 main.go:141] libmachine: (multinode-858631)   <vcpu>2</vcpu>
	I0224 01:00:08.322817   21922 main.go:141] libmachine: (multinode-858631)   <features>
	I0224 01:00:08.322826   21922 main.go:141] libmachine: (multinode-858631)     <acpi/>
	I0224 01:00:08.322831   21922 main.go:141] libmachine: (multinode-858631)     <apic/>
	I0224 01:00:08.322839   21922 main.go:141] libmachine: (multinode-858631)     <pae/>
	I0224 01:00:08.322849   21922 main.go:141] libmachine: (multinode-858631)     
	I0224 01:00:08.322859   21922 main.go:141] libmachine: (multinode-858631)   </features>
	I0224 01:00:08.322872   21922 main.go:141] libmachine: (multinode-858631)   <cpu mode='host-passthrough'>
	I0224 01:00:08.322880   21922 main.go:141] libmachine: (multinode-858631)   
	I0224 01:00:08.322885   21922 main.go:141] libmachine: (multinode-858631)   </cpu>
	I0224 01:00:08.322912   21922 main.go:141] libmachine: (multinode-858631)   <os>
	I0224 01:00:08.322936   21922 main.go:141] libmachine: (multinode-858631)     <type>hvm</type>
	I0224 01:00:08.322955   21922 main.go:141] libmachine: (multinode-858631)     <boot dev='cdrom'/>
	I0224 01:00:08.322966   21922 main.go:141] libmachine: (multinode-858631)     <boot dev='hd'/>
	I0224 01:00:08.322989   21922 main.go:141] libmachine: (multinode-858631)     <bootmenu enable='no'/>
	I0224 01:00:08.323001   21922 main.go:141] libmachine: (multinode-858631)   </os>
	I0224 01:00:08.323011   21922 main.go:141] libmachine: (multinode-858631)   <devices>
	I0224 01:00:08.323027   21922 main.go:141] libmachine: (multinode-858631)     <disk type='file' device='cdrom'>
	I0224 01:00:08.323106   21922 main.go:141] libmachine: (multinode-858631)       <source file='/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631/boot2docker.iso'/>
	I0224 01:00:08.323139   21922 main.go:141] libmachine: (multinode-858631)       <target dev='hdc' bus='scsi'/>
	I0224 01:00:08.323157   21922 main.go:141] libmachine: (multinode-858631)       <readonly/>
	I0224 01:00:08.323170   21922 main.go:141] libmachine: (multinode-858631)     </disk>
	I0224 01:00:08.323187   21922 main.go:141] libmachine: (multinode-858631)     <disk type='file' device='disk'>
	I0224 01:00:08.323203   21922 main.go:141] libmachine: (multinode-858631)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0224 01:00:08.323221   21922 main.go:141] libmachine: (multinode-858631)       <source file='/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631/multinode-858631.rawdisk'/>
	I0224 01:00:08.323237   21922 main.go:141] libmachine: (multinode-858631)       <target dev='hda' bus='virtio'/>
	I0224 01:00:08.323250   21922 main.go:141] libmachine: (multinode-858631)     </disk>
	I0224 01:00:08.323265   21922 main.go:141] libmachine: (multinode-858631)     <interface type='network'>
	I0224 01:00:08.323279   21922 main.go:141] libmachine: (multinode-858631)       <source network='mk-multinode-858631'/>
	I0224 01:00:08.323292   21922 main.go:141] libmachine: (multinode-858631)       <model type='virtio'/>
	I0224 01:00:08.323307   21922 main.go:141] libmachine: (multinode-858631)     </interface>
	I0224 01:00:08.323320   21922 main.go:141] libmachine: (multinode-858631)     <interface type='network'>
	I0224 01:00:08.323334   21922 main.go:141] libmachine: (multinode-858631)       <source network='default'/>
	I0224 01:00:08.323345   21922 main.go:141] libmachine: (multinode-858631)       <model type='virtio'/>
	I0224 01:00:08.323359   21922 main.go:141] libmachine: (multinode-858631)     </interface>
	I0224 01:00:08.323371   21922 main.go:141] libmachine: (multinode-858631)     <serial type='pty'>
	I0224 01:00:08.323413   21922 main.go:141] libmachine: (multinode-858631)       <target port='0'/>
	I0224 01:00:08.323438   21922 main.go:141] libmachine: (multinode-858631)     </serial>
	I0224 01:00:08.323453   21922 main.go:141] libmachine: (multinode-858631)     <console type='pty'>
	I0224 01:00:08.323467   21922 main.go:141] libmachine: (multinode-858631)       <target type='serial' port='0'/>
	I0224 01:00:08.323481   21922 main.go:141] libmachine: (multinode-858631)     </console>
	I0224 01:00:08.323493   21922 main.go:141] libmachine: (multinode-858631)     <rng model='virtio'>
	I0224 01:00:08.323506   21922 main.go:141] libmachine: (multinode-858631)       <backend model='random'>/dev/random</backend>
	I0224 01:00:08.323516   21922 main.go:141] libmachine: (multinode-858631)     </rng>
	I0224 01:00:08.323522   21922 main.go:141] libmachine: (multinode-858631)     
	I0224 01:00:08.323529   21922 main.go:141] libmachine: (multinode-858631)     
	I0224 01:00:08.323539   21922 main.go:141] libmachine: (multinode-858631)   </devices>
	I0224 01:00:08.323547   21922 main.go:141] libmachine: (multinode-858631) </domain>
	I0224 01:00:08.323554   21922 main.go:141] libmachine: (multinode-858631) 
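The XML emitted above is what libmachine hands to libvirt. A rough manual equivalent, for anyone reproducing this step outside minikube (a sketch; assumes the XML dump was first saved to a local file, and that libmachine itself uses the libvirt API rather than the virsh CLI):

	virsh net-list --all                     # 'default' and 'mk-multinode-858631' should both be listed
	virsh define /tmp/multinode-858631.xml   # register the domain from the XML dump above
	virsh start multinode-858631             # boot the VM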
	I0224 01:00:08.327792   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:ea:23:eb in network default
	I0224 01:00:08.328394   21922 main.go:141] libmachine: (multinode-858631) Ensuring networks are active...
	I0224 01:00:08.328414   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:08.329011   21922 main.go:141] libmachine: (multinode-858631) Ensuring network default is active
	I0224 01:00:08.329257   21922 main.go:141] libmachine: (multinode-858631) Ensuring network mk-multinode-858631 is active
	I0224 01:00:08.329759   21922 main.go:141] libmachine: (multinode-858631) Getting domain xml...
	I0224 01:00:08.330421   21922 main.go:141] libmachine: (multinode-858631) Creating domain...
	I0224 01:00:09.542870   21922 main.go:141] libmachine: (multinode-858631) Waiting to get IP...
	I0224 01:00:09.543702   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:09.544117   21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find current IP address of domain multinode-858631 in network mk-multinode-858631
	I0224 01:00:09.544177   21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:09.544117   21944 retry.go:31] will retry after 287.452956ms: waiting for machine to come up
	I0224 01:00:09.833774   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:09.834253   21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find current IP address of domain multinode-858631 in network mk-multinode-858631
	I0224 01:00:09.834281   21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:09.834215   21944 retry.go:31] will retry after 273.07846ms: waiting for machine to come up
	I0224 01:00:10.108537   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:10.108935   21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find current IP address of domain multinode-858631 in network mk-multinode-858631
	I0224 01:00:10.108967   21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:10.108868   21944 retry.go:31] will retry after 375.690347ms: waiting for machine to come up
	I0224 01:00:10.486312   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:10.486717   21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find current IP address of domain multinode-858631 in network mk-multinode-858631
	I0224 01:00:10.486744   21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:10.486664   21944 retry.go:31] will retry after 536.69253ms: waiting for machine to come up
	I0224 01:00:11.025320   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:11.025808   21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find current IP address of domain multinode-858631 in network mk-multinode-858631
	I0224 01:00:11.025857   21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:11.025757   21944 retry.go:31] will retry after 478.181904ms: waiting for machine to come up
	I0224 01:00:11.505306   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:11.505791   21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find current IP address of domain multinode-858631 in network mk-multinode-858631
	I0224 01:00:11.505831   21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:11.505730   21944 retry.go:31] will retry after 832.674291ms: waiting for machine to come up
	I0224 01:00:12.339590   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:12.339985   21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find current IP address of domain multinode-858631 in network mk-multinode-858631
	I0224 01:00:12.340008   21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:12.339946   21944 retry.go:31] will retry after 979.085118ms: waiting for machine to come up
	I0224 01:00:13.320588   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:13.320998   21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find current IP address of domain multinode-858631 in network mk-multinode-858631
	I0224 01:00:13.321025   21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:13.320951   21944 retry.go:31] will retry after 1.324498058s: waiting for machine to come up
	I0224 01:00:14.647576   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:14.648036   21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find current IP address of domain multinode-858631 in network mk-multinode-858631
	I0224 01:00:14.648065   21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:14.647998   21944 retry.go:31] will retry after 1.26767628s: waiting for machine to come up
	I0224 01:00:15.916908   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:15.917321   21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find current IP address of domain multinode-858631 in network mk-multinode-858631
	I0224 01:00:15.917351   21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:15.917277   21944 retry.go:31] will retry after 2.091389937s: waiting for machine to come up
	I0224 01:00:18.010032   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:18.010458   21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find current IP address of domain multinode-858631 in network mk-multinode-858631
	I0224 01:00:18.010496   21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:18.010410   21944 retry.go:31] will retry after 2.648687931s: waiting for machine to come up
	I0224 01:00:20.662372   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:20.662826   21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find current IP address of domain multinode-858631 in network mk-multinode-858631
	I0224 01:00:20.662889   21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:20.662777   21944 retry.go:31] will retry after 2.698111279s: waiting for machine to come up
	I0224 01:00:23.362043   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:23.362471   21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find current IP address of domain multinode-858631 in network mk-multinode-858631
	I0224 01:00:23.362500   21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:23.362421   21944 retry.go:31] will retry after 3.027915498s: waiting for machine to come up
	I0224 01:00:26.391429   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:26.391846   21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find current IP address of domain multinode-858631 in network mk-multinode-858631
	I0224 01:00:26.391874   21922 main.go:141] libmachine: (multinode-858631) DBG | I0224 01:00:26.391803   21944 retry.go:31] will retry after 3.726786776s: waiting for machine to come up
	I0224 01:00:30.121111   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:30.121498   21922 main.go:141] libmachine: (multinode-858631) Found IP for machine: 192.168.39.217
	I0224 01:00:30.121529   21922 main.go:141] libmachine: (multinode-858631) Reserving static IP address...
	I0224 01:00:30.121545   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has current primary IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:30.121936   21922 main.go:141] libmachine: (multinode-858631) DBG | unable to find host DHCP lease matching {name: "multinode-858631", mac: "52:54:00:96:ba:53", ip: "192.168.39.217"} in network mk-multinode-858631
	I0224 01:00:30.190556   21922 main.go:141] libmachine: (multinode-858631) DBG | Getting to WaitForSSH function...
	I0224 01:00:30.190591   21922 main.go:141] libmachine: (multinode-858631) Reserved static IP address: 192.168.39.217
	I0224 01:00:30.190604   21922 main.go:141] libmachine: (multinode-858631) Waiting for SSH to be available...
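The repeated "will retry after ..." lines above are libmachine polling libvirt for the domain's DHCP lease with randomized, growing delays. A minimal shell equivalent of that wait, using the MAC address from the log (a sketch; a fixed sleep stands in for libmachine's jittered backoff):

	while ! virsh net-dhcp-leases mk-multinode-858631 | grep -q '52:54:00:96:ba:53'; do
	  sleep 1   # libmachine retries with increasing, randomized delays instead
	done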
	I0224 01:00:30.192817   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:30.193150   21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:minikube Clientid:01:52:54:00:96:ba:53}
	I0224 01:00:30.193180   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:30.193311   21922 main.go:141] libmachine: (multinode-858631) DBG | Using SSH client type: external
	I0224 01:00:30.193350   21922 main.go:141] libmachine: (multinode-858631) DBG | Using SSH private key: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631/id_rsa (-rw-------)
	I0224 01:00:30.193384   21922 main.go:141] libmachine: (multinode-858631) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0224 01:00:30.193401   21922 main.go:141] libmachine: (multinode-858631) DBG | About to run SSH command:
	I0224 01:00:30.193436   21922 main.go:141] libmachine: (multinode-858631) DBG | exit 0
	I0224 01:00:30.288864   21922 main.go:141] libmachine: (multinode-858631) DBG | SSH cmd err, output: <nil>: 
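The external SSH probe above, restated in readable form (a sketch assembled from the options the log shows; a zero exit status is what marks SSH as available):

	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=10 \
	    -i /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631/id_rsa \
	    docker@192.168.39.217 'exit 0'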
	I0224 01:00:30.289137   21922 main.go:141] libmachine: (multinode-858631) KVM machine creation complete!
	I0224 01:00:30.289430   21922 main.go:141] libmachine: (multinode-858631) Calling .GetConfigRaw
	I0224 01:00:30.289976   21922 main.go:141] libmachine: (multinode-858631) Calling .DriverName
	I0224 01:00:30.290153   21922 main.go:141] libmachine: (multinode-858631) Calling .DriverName
	I0224 01:00:30.290321   21922 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0224 01:00:30.290336   21922 main.go:141] libmachine: (multinode-858631) Calling .GetState
	I0224 01:00:30.291526   21922 main.go:141] libmachine: Detecting operating system of created instance...
	I0224 01:00:30.291540   21922 main.go:141] libmachine: Waiting for SSH to be available...
	I0224 01:00:30.291545   21922 main.go:141] libmachine: Getting to WaitForSSH function...
	I0224 01:00:30.291551   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
	I0224 01:00:30.293860   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:30.294219   21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
	I0224 01:00:30.294249   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:30.294386   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
	I0224 01:00:30.294544   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
	I0224 01:00:30.294672   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
	I0224 01:00:30.294802   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
	I0224 01:00:30.294950   21922 main.go:141] libmachine: Using SSH client type: native
	I0224 01:00:30.295359   21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0224 01:00:30.295371   21922 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0224 01:00:30.420171   21922 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 01:00:30.420190   21922 main.go:141] libmachine: Detecting the provisioner...
	I0224 01:00:30.420197   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
	I0224 01:00:30.422737   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:30.423108   21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
	I0224 01:00:30.423136   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:30.423267   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
	I0224 01:00:30.423458   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
	I0224 01:00:30.423597   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
	I0224 01:00:30.423739   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
	I0224 01:00:30.423884   21922 main.go:141] libmachine: Using SSH client type: native
	I0224 01:00:30.424275   21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0224 01:00:30.424287   21922 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0224 01:00:30.549909   21922 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g41e8300-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0224 01:00:30.549975   21922 main.go:141] libmachine: found compatible host: buildroot
	I0224 01:00:30.549989   21922 main.go:141] libmachine: Provisioning with buildroot...
	I0224 01:00:30.550000   21922 main.go:141] libmachine: (multinode-858631) Calling .GetMachineName
	I0224 01:00:30.550269   21922 buildroot.go:166] provisioning hostname "multinode-858631"
	I0224 01:00:30.550293   21922 main.go:141] libmachine: (multinode-858631) Calling .GetMachineName
	I0224 01:00:30.550475   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
	I0224 01:00:30.552822   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:30.553097   21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
	I0224 01:00:30.553124   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:30.553249   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
	I0224 01:00:30.553417   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
	I0224 01:00:30.553588   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
	I0224 01:00:30.553701   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
	I0224 01:00:30.553839   21922 main.go:141] libmachine: Using SSH client type: native
	I0224 01:00:30.554225   21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0224 01:00:30.554239   21922 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-858631 && echo "multinode-858631" | sudo tee /etc/hostname
	I0224 01:00:30.693960   21922 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-858631
	
	I0224 01:00:30.693988   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
	I0224 01:00:30.696773   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:30.697120   21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
	I0224 01:00:30.697152   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:30.697330   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
	I0224 01:00:30.697511   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
	I0224 01:00:30.697665   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
	I0224 01:00:30.697809   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
	I0224 01:00:30.697941   21922 main.go:141] libmachine: Using SSH client type: native
	I0224 01:00:30.698385   21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0224 01:00:30.698405   21922 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-858631' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-858631/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-858631' | sudo tee -a /etc/hosts; 
				fi
			fi
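Two quick checks that the hostname script above took effect inside the guest (a sketch):

	hostname                       # expect: multinode-858631
	grep '^127.0.1.1' /etc/hosts   # expect: 127.0.1.1 multinode-858631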
	I0224 01:00:30.833828   21922 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 01:00:30.833864   21922 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15909-4074/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-4074/.minikube}
	I0224 01:00:30.833907   21922 buildroot.go:174] setting up certificates
	I0224 01:00:30.833922   21922 provision.go:83] configureAuth start
	I0224 01:00:30.833940   21922 main.go:141] libmachine: (multinode-858631) Calling .GetMachineName
	I0224 01:00:30.834224   21922 main.go:141] libmachine: (multinode-858631) Calling .GetIP
	I0224 01:00:30.836812   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:30.837162   21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
	I0224 01:00:30.837191   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:30.837314   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
	I0224 01:00:30.839548   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:30.839897   21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
	I0224 01:00:30.839920   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:30.839995   21922 provision.go:138] copyHostCerts
	I0224 01:00:30.840033   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem
	I0224 01:00:30.840074   21922 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem, removing ...
	I0224 01:00:30.840085   21922 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem
	I0224 01:00:30.840143   21922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem (1679 bytes)
	I0224 01:00:30.840230   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem
	I0224 01:00:30.840250   21922 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem, removing ...
	I0224 01:00:30.840258   21922 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem
	I0224 01:00:30.840284   21922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem (1078 bytes)
	I0224 01:00:30.840357   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem
	I0224 01:00:30.840377   21922 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem, removing ...
	I0224 01:00:30.840382   21922 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem
	I0224 01:00:30.840402   21922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem (1123 bytes)
	I0224 01:00:30.840450   21922 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca-key.pem org=jenkins.multinode-858631 san=[192.168.39.217 192.168.39.217 localhost 127.0.0.1 minikube multinode-858631]
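provision.go generates that server certificate in-process via Go's crypto libraries; an openssl equivalent producing the same SAN set (a sketch only, not a command minikube runs — file names shortened from the logged paths):

	openssl req -new -key server-key.pem -subj "/O=jenkins.multinode-858631" |
	  openssl x509 -req -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 1095 \
	    -extfile <(printf 'subjectAltName=IP:192.168.39.217,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-858631') \
	    -out server.pem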
	I0224 01:00:30.983124   21922 provision.go:172] copyRemoteCerts
	I0224 01:00:30.983169   21922 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 01:00:30.983190   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
	I0224 01:00:30.985644   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:30.985953   21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
	I0224 01:00:30.985982   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:30.986131   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
	I0224 01:00:30.986312   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
	I0224 01:00:30.986474   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
	I0224 01:00:30.986605   21922 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631/id_rsa Username:docker}
	I0224 01:00:31.081588   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0224 01:00:31.081671   21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0224 01:00:31.103684   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0224 01:00:31.103738   21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0224 01:00:31.125916   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0224 01:00:31.125976   21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0224 01:00:31.148440   21922 provision.go:86] duration metric: configureAuth took 314.504412ms
	I0224 01:00:31.148459   21922 buildroot.go:189] setting minikube options for container-runtime
	I0224 01:00:31.148629   21922 config.go:182] Loaded profile config "multinode-858631": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 01:00:31.148652   21922 main.go:141] libmachine: (multinode-858631) Calling .DriverName
	I0224 01:00:31.148893   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
	I0224 01:00:31.151098   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:31.151447   21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
	I0224 01:00:31.151474   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:31.151613   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
	I0224 01:00:31.151787   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
	I0224 01:00:31.151961   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
	I0224 01:00:31.152107   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
	I0224 01:00:31.152279   21922 main.go:141] libmachine: Using SSH client type: native
	I0224 01:00:31.152719   21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0224 01:00:31.152733   21922 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0224 01:00:31.283355   21922 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0224 01:00:31.283381   21922 buildroot.go:70] root file system type: tmpfs
	I0224 01:00:31.283501   21922 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0224 01:00:31.283530   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
	I0224 01:00:31.286213   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:31.286507   21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
	I0224 01:00:31.286526   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:31.286697   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
	I0224 01:00:31.286883   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
	I0224 01:00:31.287047   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
	I0224 01:00:31.287198   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
	I0224 01:00:31.287357   21922 main.go:141] libmachine: Using SSH client type: native
	I0224 01:00:31.287788   21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0224 01:00:31.287859   21922 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0224 01:00:31.425437   21922 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0224 01:00:31.425499   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
	I0224 01:00:31.427865   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:31.428169   21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
	I0224 01:00:31.428192   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:31.428349   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
	I0224 01:00:31.428522   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
	I0224 01:00:31.428655   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
	I0224 01:00:31.428784   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
	I0224 01:00:31.428921   21922 main.go:141] libmachine: Using SSH client type: native
	I0224 01:00:31.429304   21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0224 01:00:31.429322   21922 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0224 01:00:32.179405   21922 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
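	(Note: the command above is an idempotent-update idiom — "diff -u live new || { mv new live; systemctl ... restart docker; }" — which swaps the unit in and restarts docker only when the content changed. The "can't stat" output shows the first-boot case: no live unit exists yet, so diff fails and the replace branch runs. A sketch of the same decision in Go, under the assumption that the content comparison is all that matters; the systemctl side effects are left as a comment:

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// needsSwap returns true when the live unit is missing or differs
	// from the candidate — exactly when the shell idiom's || branch fires.
	func needsSwap(livePath string, candidate []byte) bool {
		old, err := os.ReadFile(livePath)
		if err != nil {
			return true // no live unit yet: install it
		}
		return !bytes.Equal(old, candidate)
	}

	func main() {
		unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
		fmt.Println("swap needed:", needsSwap("/lib/systemd/system/docker.service", unit))
		// When true, the provisioner moves the .new file into place and
		// runs daemon-reload / enable / restart over SSH, as logged above.
	}
	)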
	I0224 01:00:32.179427   21922 main.go:141] libmachine: Checking connection to Docker...
	I0224 01:00:32.179435   21922 main.go:141] libmachine: (multinode-858631) Calling .GetURL
	I0224 01:00:32.180653   21922 main.go:141] libmachine: (multinode-858631) DBG | Using libvirt version 6000000
	I0224 01:00:32.183228   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:32.183562   21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
	I0224 01:00:32.183591   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:32.183746   21922 main.go:141] libmachine: Docker is up and running!
	I0224 01:00:32.183761   21922 main.go:141] libmachine: Reticulating splines...
	I0224 01:00:32.183769   21922 client.go:171] LocalClient.Create took 24.173132801s
	I0224 01:00:32.183791   21922 start.go:167] duration metric: libmachine.API.Create for "multinode-858631" took 24.173190525s
	I0224 01:00:32.183802   21922 start.go:300] post-start starting for "multinode-858631" (driver="kvm2")
	I0224 01:00:32.183807   21922 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 01:00:32.183827   21922 main.go:141] libmachine: (multinode-858631) Calling .DriverName
	I0224 01:00:32.184063   21922 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 01:00:32.184087   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
	I0224 01:00:32.186279   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:32.186573   21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
	I0224 01:00:32.186604   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:32.186732   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
	I0224 01:00:32.186952   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
	I0224 01:00:32.187107   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
	I0224 01:00:32.187244   21922 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631/id_rsa Username:docker}
	I0224 01:00:32.278374   21922 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 01:00:32.282760   21922 command_runner.go:130] > NAME=Buildroot
	I0224 01:00:32.282783   21922 command_runner.go:130] > VERSION=2021.02.12-1-g41e8300-dirty
	I0224 01:00:32.282788   21922 command_runner.go:130] > ID=buildroot
	I0224 01:00:32.282793   21922 command_runner.go:130] > VERSION_ID=2021.02.12
	I0224 01:00:32.282798   21922 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0224 01:00:32.282828   21922 info.go:137] Remote host: Buildroot 2021.02.12
	I0224 01:00:32.282843   21922 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-4074/.minikube/addons for local assets ...
	I0224 01:00:32.282918   21922 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-4074/.minikube/files for local assets ...
	I0224 01:00:32.283013   21922 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem -> 111312.pem in /etc/ssl/certs
	I0224 01:00:32.283026   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem -> /etc/ssl/certs/111312.pem
	I0224 01:00:32.283100   21922 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 01:00:32.291259   21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem --> /etc/ssl/certs/111312.pem (1708 bytes)
	I0224 01:00:32.314029   21922 start.go:303] post-start completed in 130.213564ms
	I0224 01:00:32.314071   21922 main.go:141] libmachine: (multinode-858631) Calling .GetConfigRaw
	I0224 01:00:32.314587   21922 main.go:141] libmachine: (multinode-858631) Calling .GetIP
	I0224 01:00:32.316743   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:32.317060   21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
	I0224 01:00:32.317090   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:32.317279   21922 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/config.json ...
	I0224 01:00:32.317451   21922 start.go:128] duration metric: createHost completed in 24.325103397s
	I0224 01:00:32.317493   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
	I0224 01:00:32.319467   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:32.319758   21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
	I0224 01:00:32.319785   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:32.319927   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
	I0224 01:00:32.320117   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
	I0224 01:00:32.320246   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
	I0224 01:00:32.320378   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
	I0224 01:00:32.320517   21922 main.go:141] libmachine: Using SSH client type: native
	I0224 01:00:32.320897   21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0224 01:00:32.320908   21922 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0224 01:00:32.449766   21922 main.go:141] libmachine: SSH cmd err, output: <nil>: 1677200432.433992026
	
	I0224 01:00:32.449792   21922 fix.go:207] guest clock: 1677200432.433992026
	I0224 01:00:32.449804   21922 fix.go:220] Guest: 2023-02-24 01:00:32.433992026 +0000 UTC Remote: 2023-02-24 01:00:32.317464505 +0000 UTC m=+24.432912270 (delta=116.527521ms)
	I0224 01:00:32.449830   21922 fix.go:191] guest clock delta is within tolerance: 116.527521ms
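	(Note: the skew check above reads the guest clock with "date +%s.%N" and compares it against the host-side timestamp; the 116.527521ms delta is under minikube's resync threshold, so the guest clock is left alone. A small self-contained Go version using the two timestamps from the log — the tolerance itself is an implementation detail and not shown here:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock parses `date +%s.%N` output into a time.Time.
	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1677200432.433992026") // guest value from the log
		if err != nil {
			panic(err)
		}
		host := time.Unix(1677200432, 317464505) // the "Remote" timestamp from the log
		fmt.Println("delta:", guest.Sub(host))   // prints 116.527521ms, matching the log
	}
	)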
	I0224 01:00:32.449837   21922 start.go:83] releasing machines lock for "multinode-858631", held for 24.457561476s
	I0224 01:00:32.449860   21922 main.go:141] libmachine: (multinode-858631) Calling .DriverName
	I0224 01:00:32.450137   21922 main.go:141] libmachine: (multinode-858631) Calling .GetIP
	I0224 01:00:32.452532   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:32.452856   21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
	I0224 01:00:32.452895   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:32.453048   21922 main.go:141] libmachine: (multinode-858631) Calling .DriverName
	I0224 01:00:32.453653   21922 main.go:141] libmachine: (multinode-858631) Calling .DriverName
	I0224 01:00:32.453804   21922 main.go:141] libmachine: (multinode-858631) Calling .DriverName
	I0224 01:00:32.453885   21922 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0224 01:00:32.453924   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
	I0224 01:00:32.453963   21922 ssh_runner.go:195] Run: cat /version.json
	I0224 01:00:32.453982   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
	I0224 01:00:32.457509   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:32.457610   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:32.457892   21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
	I0224 01:00:32.457947   21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
	I0224 01:00:32.457971   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:32.457989   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:32.458118   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
	I0224 01:00:32.458210   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
	I0224 01:00:32.458343   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
	I0224 01:00:32.458412   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
	I0224 01:00:32.458495   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
	I0224 01:00:32.458562   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
	I0224 01:00:32.458629   21922 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631/id_rsa Username:docker}
	I0224 01:00:32.458686   21922 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631/id_rsa Username:docker}
	I0224 01:00:32.566472   21922 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0224 01:00:32.567132   21922 command_runner.go:130] > {"iso_version": "v1.29.0-1676568791-15849", "kicbase_version": "v0.0.37-1675980448-15752", "minikube_version": "v1.29.0", "commit": "cf7ad99382c4b89a2ffa286b1101797332265ce3"}
	I0224 01:00:32.567237   21922 ssh_runner.go:195] Run: systemctl --version
	I0224 01:00:32.572390   21922 command_runner.go:130] > systemd 247 (247)
	I0224 01:00:32.572410   21922 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0224 01:00:32.572692   21922 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0224 01:00:32.577614   21922 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0224 01:00:32.577810   21922 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0224 01:00:32.577861   21922 ssh_runner.go:195] Run: which cri-dockerd
	I0224 01:00:32.581074   21922 command_runner.go:130] > /usr/bin/cri-dockerd
	I0224 01:00:32.581156   21922 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0224 01:00:32.589028   21922 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0224 01:00:32.604136   21922 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0224 01:00:32.618439   21922 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0224 01:00:32.618464   21922 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
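	(Note: the find/-exec command above disables competing CNI configs by renaming bridge/podman files with a .mk_disabled suffix — here it moved 87-podman-bridge.conflist aside. A rough Go equivalent of that rename pass, matching on the same substrings; this is an approximation, not minikube's cni.go:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableConflictingCNI renames bridge/podman CNI configs out of the
	// way so they cannot conflict with the CNI minikube installs.
	func disableConflictingCNI(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return disabled, err
				}
				disabled = append(disabled, src)
			}
		}
		return disabled, nil
	}

	func main() {
		disabled, err := disableConflictingCNI("/etc/cni/net.d")
		fmt.Println(disabled, err)
	}
	)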
	I0224 01:00:32.618473   21922 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 01:00:32.618552   21922 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 01:00:32.645414   21922 docker.go:630] Got preloaded images: 
	I0224 01:00:32.645434   21922 docker.go:636] registry.k8s.io/kube-apiserver:v1.26.1 wasn't preloaded
	I0224 01:00:32.645487   21922 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0224 01:00:32.653967   21922 command_runner.go:139] > {"Repositories":{}}
	I0224 01:00:32.654083   21922 ssh_runner.go:195] Run: which lz4
	I0224 01:00:32.657537   21922 command_runner.go:130] > /usr/bin/lz4
	I0224 01:00:32.657561   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0224 01:00:32.657623   21922 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0224 01:00:32.661229   21922 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0224 01:00:32.661457   21922 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0224 01:00:32.661489   21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (416334111 bytes)
	I0224 01:00:34.312007   21922 docker.go:594] Took 1.654399 seconds to copy over tarball
	I0224 01:00:34.312064   21922 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0224 01:00:36.924173   21922 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.612079648s)
	I0224 01:00:36.924227   21922 ssh_runner.go:146] rm: /preloaded.tar.lz4
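	(Note: the sequence above is stat-then-scp-then-extract: probe for /preloaded.tar.lz4, copy the ~416 MB image tarball only when it is absent, unpack it into /var with lz4, then delete it. A sketch of the guest-side half in Go, with paths taken from the log; the scp step is elided because minikube performs it over SSH:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// ensurePreload unpacks the preload tarball with the same command the
	// log shows (`tar -I lz4 -C /var -xf /preloaded.tar.lz4`) and removes
	// it afterwards, as ssh_runner.go:146 does above.
	func ensurePreload(tarball string) error {
		if _, err := os.Stat(tarball); err != nil {
			return fmt.Errorf("preload missing (copy it over first): %w", err)
		}
		start := time.Now()
		cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("extract: %v: %s", err, out)
		}
		fmt.Printf("extracted in %s\n", time.Since(start))
		return os.Remove(tarball)
	}

	func main() {
		if err := ensurePreload("/preloaded.tar.lz4"); err != nil {
			fmt.Println(err)
		}
	}
	)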
	I0224 01:00:36.960913   21922 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0224 01:00:36.970095   21922 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.9.3":"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a":"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.6-0":"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c":"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.26.1":"sha256:deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","registry.k8s.io/kube-apiserver@sha256:99e1ed9fbc8a8d36a70f148f25130c02e0e366875249906be0bcb2c2d9df0c26":"sha256:deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.26.1":"sha256:e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","registry.k8s.io/kube-controller-manager@sha256:40adecbe3a40aa147c7d6e9a1f5fbd99b3f6d42d5222483ed3a47337d4f9a10b":"sha256:e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.26.1":"sha256:46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd","registry.k8s.io/kube-proxy@sha256:85f705e7d98158a67432c53885b0d470c673b0fad3693440b45d07efebcda1c3":"sha256:46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.26.1":"sha256:655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","registry.k8s.io/kube-scheduler@sha256:af0292c2c4fa6d09ee8544445eef373c1c280113cb6c968398a37da3744c41e4":"sha256:655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0224 01:00:36.970254   21922 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2628 bytes)
	I0224 01:00:36.987660   21922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 01:00:37.096321   21922 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0224 01:00:40.441534   21922 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.345178102s)
	I0224 01:00:40.441576   21922 start.go:485] detecting cgroup driver to use...
	I0224 01:00:40.441724   21922 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 01:00:40.458990   21922 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0224 01:00:40.459016   21922 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0224 01:00:40.459076   21922 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0224 01:00:40.469201   21922 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0224 01:00:40.479358   21922 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0224 01:00:40.479427   21922 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0224 01:00:40.488595   21922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 01:00:40.497890   21922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0224 01:00:40.506619   21922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 01:00:40.515720   21922 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 01:00:40.524910   21922 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0224 01:00:40.533681   21922 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 01:00:40.541806   21922 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0224 01:00:40.541876   21922 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 01:00:40.549793   21922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 01:00:40.652205   21922 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0224 01:00:40.668651   21922 start.go:485] detecting cgroup driver to use...
	I0224 01:00:40.668764   21922 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0224 01:00:40.682477   21922 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0224 01:00:40.682492   21922 command_runner.go:130] > [Unit]
	I0224 01:00:40.682498   21922 command_runner.go:130] > Description=Docker Application Container Engine
	I0224 01:00:40.682510   21922 command_runner.go:130] > Documentation=https://docs.docker.com
	I0224 01:00:40.682522   21922 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0224 01:00:40.682529   21922 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0224 01:00:40.682541   21922 command_runner.go:130] > StartLimitBurst=3
	I0224 01:00:40.682549   21922 command_runner.go:130] > StartLimitIntervalSec=60
	I0224 01:00:40.682557   21922 command_runner.go:130] > [Service]
	I0224 01:00:40.682560   21922 command_runner.go:130] > Type=notify
	I0224 01:00:40.682565   21922 command_runner.go:130] > Restart=on-failure
	I0224 01:00:40.682572   21922 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0224 01:00:40.682587   21922 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0224 01:00:40.682596   21922 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0224 01:00:40.682602   21922 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0224 01:00:40.682610   21922 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0224 01:00:40.682621   21922 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0224 01:00:40.682633   21922 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0224 01:00:40.682654   21922 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0224 01:00:40.682667   21922 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0224 01:00:40.682674   21922 command_runner.go:130] > ExecStart=
	I0224 01:00:40.682698   21922 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0224 01:00:40.682709   21922 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0224 01:00:40.682724   21922 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0224 01:00:40.682736   21922 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0224 01:00:40.682745   21922 command_runner.go:130] > LimitNOFILE=infinity
	I0224 01:00:40.682750   21922 command_runner.go:130] > LimitNPROC=infinity
	I0224 01:00:40.682754   21922 command_runner.go:130] > LimitCORE=infinity
	I0224 01:00:40.682762   21922 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0224 01:00:40.682771   21922 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0224 01:00:40.682778   21922 command_runner.go:130] > TasksMax=infinity
	I0224 01:00:40.682782   21922 command_runner.go:130] > TimeoutStartSec=0
	I0224 01:00:40.682791   21922 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0224 01:00:40.682796   21922 command_runner.go:130] > Delegate=yes
	I0224 01:00:40.682802   21922 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0224 01:00:40.682808   21922 command_runner.go:130] > KillMode=process
	I0224 01:00:40.682815   21922 command_runner.go:130] > [Install]
	I0224 01:00:40.682828   21922 command_runner.go:130] > WantedBy=multi-user.target
	I0224 01:00:40.682877   21922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0224 01:00:40.696103   21922 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0224 01:00:40.713056   21922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0224 01:00:40.725487   21922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0224 01:00:40.737288   21922 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0224 01:00:40.765853   21922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0224 01:00:40.778107   21922 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 01:00:40.794238   21922 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0224 01:00:40.794265   21922 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
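	(Note: /etc/crictl.yaml is written twice in this run — first pointing crictl at containerd, then overwritten above for cri-dockerd once Docker is settled on as the runtime. A trivial Go rendering of the file's contents, with the endpoints copied from the log:

	package main

	import "fmt"

	// crictlYAML renders the two-line config that tells crictl which CRI
	// socket to talk to; both runtime and image endpoints use one socket.
	func crictlYAML(socket string) string {
		return fmt.Sprintf("runtime-endpoint: unix://%s\nimage-endpoint: unix://%s\n", socket, socket)
	}

	func main() {
		fmt.Print(crictlYAML("/var/run/cri-dockerd.sock"))
	}
	)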
	I0224 01:00:40.794554   21922 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0224 01:00:40.894905   21922 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0224 01:00:40.996621   21922 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0224 01:00:40.996653   21922 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0224 01:00:41.012805   21922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 01:00:41.111187   21922 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0224 01:00:42.458510   21922 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.347288974s)
	I0224 01:00:42.458577   21922 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 01:00:42.560401   21922 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0224 01:00:42.658143   21922 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 01:00:42.765584   21922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 01:00:42.868302   21922 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0224 01:00:42.885199   21922 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0224 01:00:42.885268   21922 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0224 01:00:42.891095   21922 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0224 01:00:42.891112   21922 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0224 01:00:42.891118   21922 command_runner.go:130] > Device: 16h/22d	Inode: 898         Links: 1
	I0224 01:00:42.891127   21922 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0224 01:00:42.891135   21922 command_runner.go:130] > Access: 2023-02-24 01:00:42.874787956 +0000
	I0224 01:00:42.891143   21922 command_runner.go:130] > Modify: 2023-02-24 01:00:42.874787956 +0000
	I0224 01:00:42.891158   21922 command_runner.go:130] > Change: 2023-02-24 01:00:42.877789446 +0000
	I0224 01:00:42.891164   21922 command_runner.go:130] >  Birth: -
	I0224 01:00:42.891189   21922 start.go:553] Will wait 60s for crictl version
	I0224 01:00:42.891244   21922 ssh_runner.go:195] Run: which crictl
	I0224 01:00:42.894967   21922 command_runner.go:130] > /usr/bin/crictl
	I0224 01:00:42.895105   21922 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0224 01:00:42.999570   21922 command_runner.go:130] > Version:  0.1.0
	I0224 01:00:42.999599   21922 command_runner.go:130] > RuntimeName:  docker
	I0224 01:00:42.999608   21922 command_runner.go:130] > RuntimeVersion:  20.10.23
	I0224 01:00:42.999617   21922 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0224 01:00:42.999648   21922 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0224 01:00:42.999707   21922 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 01:00:43.031033   21922 command_runner.go:130] > 20.10.23
	I0224 01:00:43.031201   21922 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 01:00:43.062577   21922 command_runner.go:130] > 20.10.23
	I0224 01:00:43.188270   21922 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0224 01:00:43.188370   21922 main.go:141] libmachine: (multinode-858631) Calling .GetIP
	I0224 01:00:43.190950   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:43.191349   21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
	I0224 01:00:43.191386   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:00:43.191576   21922 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0224 01:00:43.196168   21922 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 01:00:43.208448   21922 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 01:00:43.208505   21922 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 01:00:43.235549   21922 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0224 01:00:43.235573   21922 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0224 01:00:43.235581   21922 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0224 01:00:43.235589   21922 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0224 01:00:43.235597   21922 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0224 01:00:43.235604   21922 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0224 01:00:43.235612   21922 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0224 01:00:43.235620   21922 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 01:00:43.236734   21922 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0224 01:00:43.236755   21922 docker.go:560] Images already preloaded, skipping extraction
	I0224 01:00:43.236808   21922 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 01:00:43.259789   21922 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0224 01:00:43.259813   21922 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0224 01:00:43.259819   21922 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0224 01:00:43.259825   21922 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0224 01:00:43.259829   21922 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0224 01:00:43.259837   21922 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0224 01:00:43.259842   21922 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0224 01:00:43.259854   21922 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 01:00:43.260930   21922 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0224 01:00:43.260945   21922 cache_images.go:84] Images are preloaded, skipping loading
	I0224 01:00:43.260996   21922 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0224 01:00:43.291394   21922 command_runner.go:130] > cgroupfs
	I0224 01:00:43.292593   21922 cni.go:84] Creating CNI manager for ""
	I0224 01:00:43.292611   21922 cni.go:136] 1 nodes found, recommending kindnet
	I0224 01:00:43.292629   21922 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0224 01:00:43.292649   21922 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-858631 NodeName:multinode-858631 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0224 01:00:43.292803   21922 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-858631"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0224 01:00:43.292905   21922 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-858631 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-858631 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0224 01:00:43.292960   21922 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0224 01:00:43.302486   21922 command_runner.go:130] > kubeadm
	I0224 01:00:43.302511   21922 command_runner.go:130] > kubectl
	I0224 01:00:43.302515   21922 command_runner.go:130] > kubelet
	I0224 01:00:43.302531   21922 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 01:00:43.302568   21922 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0224 01:00:43.311742   21922 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (450 bytes)
	I0224 01:00:43.327962   21922 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 01:00:43.343617   21922 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0224 01:00:43.359576   21922 ssh_runner.go:195] Run: grep 192.168.39.217	control-plane.minikube.internal$ /etc/hosts
	I0224 01:00:43.363235   21922 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.217	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 01:00:43.374564   21922 certs.go:56] Setting up /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631 for IP: 192.168.39.217
	I0224 01:00:43.374588   21922 certs.go:186] acquiring lock for shared ca certs: {Name:mk0c9037d1d3974a6bc5ba375ef4804966dba284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 01:00:43.374731   21922 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.key
	I0224 01:00:43.374772   21922 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15909-4074/.minikube/proxy-client-ca.key
	I0224 01:00:43.374825   21922 certs.go:315] generating minikube-user signed cert: /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.key
	I0224 01:00:43.374838   21922 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.crt with IP's: []
	I0224 01:00:43.757434   21922 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.crt ...
	I0224 01:00:43.757461   21922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.crt: {Name:mkcc0c569c9788541aab5f3223cd2b7951674618 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 01:00:43.757640   21922 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.key ...
	I0224 01:00:43.757650   21922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.key: {Name:mk0ed358ba22663cd96c2d3cd2869c3a20fbda2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 01:00:43.757728   21922 certs.go:315] generating minikube signed cert: /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/apiserver.key.891f873f
	I0224 01:00:43.757741   21922 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/apiserver.crt.891f873f with IP's: [192.168.39.217 10.96.0.1 127.0.0.1 10.0.0.1]
	I0224 01:00:43.973440   21922 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/apiserver.crt.891f873f ...
	I0224 01:00:43.973474   21922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/apiserver.crt.891f873f: {Name:mk00cd72cfae969b641b12281c1312aa0fbdbefe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 01:00:43.973627   21922 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/apiserver.key.891f873f ...
	I0224 01:00:43.973637   21922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/apiserver.key.891f873f: {Name:mkbc17d769c4cddfc5578c3ea30c376f66ff2a9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 01:00:43.973703   21922 certs.go:333] copying /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/apiserver.crt.891f873f -> /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/apiserver.crt
	I0224 01:00:43.973777   21922 certs.go:337] copying /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/apiserver.key.891f873f -> /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/apiserver.key
	I0224 01:00:43.973824   21922 certs.go:315] generating aggregator signed cert: /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/proxy-client.key
	I0224 01:00:43.973834   21922 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/proxy-client.crt with IP's: []
	I0224 01:00:44.094334   21922 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/proxy-client.crt ...
	I0224 01:00:44.094360   21922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/proxy-client.crt: {Name:mk8988f1556cd909013dfd0d62c0a8c3e8199ad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 01:00:44.094504   21922 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/proxy-client.key ...
	I0224 01:00:44.094514   21922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/proxy-client.key: {Name:mk1be02ef2a6514ffd86117be55d5b107c276723 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 01:00:44.094579   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0224 01:00:44.094594   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0224 01:00:44.094610   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0224 01:00:44.094622   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0224 01:00:44.094635   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0224 01:00:44.094647   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0224 01:00:44.094658   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0224 01:00:44.094671   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
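	(Note: the "generating minikube signed cert ... with IP's: [192.168.39.217 10.96.0.1 127.0.0.1 10.0.0.1]" step above boils down to creating an x509 serving certificate whose IP SANs cover the node IP, the kubernetes Service VIP, and loopback, signed by the shared minikubeCA. A minimal standalone version in Go — illustrative only, not minikube's crypto.go; error returns are ignored for brevity:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// A throwaway CA standing in for the cached minikubeCA key pair.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// The apiserver serving cert with the four IP SANs from the log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("192.168.39.217"), net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			},
		}
		der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	}
	)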
	I0224 01:00:44.094734   21922 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/11131.pem (1338 bytes)
	W0224 01:00:44.094769   21922 certs.go:397] ignoring /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/11131_empty.pem, impossibly tiny 0 bytes
	I0224 01:00:44.094778   21922 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 01:00:44.094802   21922 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem (1078 bytes)
	I0224 01:00:44.094825   21922 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem (1123 bytes)
	I0224 01:00:44.094846   21922 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem (1679 bytes)
	I0224 01:00:44.094884   21922 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem (1708 bytes)
	I0224 01:00:44.094908   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/11131.pem -> /usr/share/ca-certificates/11131.pem
	I0224 01:00:44.094921   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem -> /usr/share/ca-certificates/111312.pem
	I0224 01:00:44.094933   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0224 01:00:44.095423   21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0224 01:00:44.119456   21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0224 01:00:44.141438   21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0224 01:00:44.163472   21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0224 01:00:44.184474   21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 01:00:44.205536   21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0224 01:00:44.226889   21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 01:00:44.248308   21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0224 01:00:44.269339   21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/certs/11131.pem --> /usr/share/ca-certificates/11131.pem (1338 bytes)
	I0224 01:00:44.290313   21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem --> /usr/share/ca-certificates/111312.pem (1708 bytes)
	I0224 01:00:44.311391   21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 01:00:44.333369   21922 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
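
The "scp memory --> <path>" entries above stream an in-memory asset straight onto the VM instead of copying a local file. A minimal Go sketch of that pattern, assuming plain ssh plus sudo tee rather than minikube's internal ssh_runner (the key path and guest IP are the ones this run uses later; the kubeconfig bytes here are a placeholder):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Hypothetical in-memory asset; the real run generates ~738 bytes of kubeconfig.
	kubeconfig := []byte("apiVersion: v1\nkind: Config\n# ...\n")
	cmd := exec.Command("ssh",
		"-i", "/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631/id_rsa",
		"docker@192.168.39.217",
		"sudo tee /var/lib/minikube/kubeconfig >/dev/null")
	cmd.Stdin = bytes.NewReader(kubeconfig) // stream the asset over stdin
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Fprintf(os.Stderr, "copy failed: %v: %s\n", err, out)
		os.Exit(1)
	}
}
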
	I0224 01:00:44.349242   21922 ssh_runner.go:195] Run: openssl version
	I0224 01:00:44.354342   21922 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0224 01:00:44.354617   21922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11131.pem && ln -fs /usr/share/ca-certificates/11131.pem /etc/ssl/certs/11131.pem"
	I0224 01:00:44.364312   21922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11131.pem
	I0224 01:00:44.368722   21922 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 24 00:46 /usr/share/ca-certificates/11131.pem
	I0224 01:00:44.368859   21922 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 00:46 /usr/share/ca-certificates/11131.pem
	I0224 01:00:44.368908   21922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11131.pem
	I0224 01:00:44.374353   21922 command_runner.go:130] > 51391683
	I0224 01:00:44.374420   21922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11131.pem /etc/ssl/certs/51391683.0"
	I0224 01:00:44.384479   21922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111312.pem && ln -fs /usr/share/ca-certificates/111312.pem /etc/ssl/certs/111312.pem"
	I0224 01:00:44.394484   21922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111312.pem
	I0224 01:00:44.398697   21922 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 24 00:46 /usr/share/ca-certificates/111312.pem
	I0224 01:00:44.398904   21922 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 00:46 /usr/share/ca-certificates/111312.pem
	I0224 01:00:44.398946   21922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111312.pem
	I0224 01:00:44.404377   21922 command_runner.go:130] > 3ec20f2e
	I0224 01:00:44.404422   21922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111312.pem /etc/ssl/certs/3ec20f2e.0"
	I0224 01:00:44.414437   21922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 01:00:44.424443   21922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 01:00:44.428582   21922 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0224 01:00:44.428754   21922 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0224 01:00:44.428790   21922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 01:00:44.434050   21922 command_runner.go:130] > b5213941
	I0224 01:00:44.434108   21922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
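
The three blocks above follow OpenSSL's CA directory convention: hash each certificate's subject with `openssl x509 -hash`, then expose the cert as /etc/ssl/certs/<hash>.0 so TLS clients can look it up by hash. A minimal Go sketch of the same step, shelling out to openssl just as the log does (a hypothetical helper, not minikube's code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/11131.pem" // one of the certs from this run
	// `openssl x509 -hash` prints the subject-name hash, e.g. "51391683".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "openssl:", err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of `ln -fs`: drop any stale link, then point it at the cert.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		fmt.Fprintln(os.Stderr, "symlink:", err)
		os.Exit(1)
	}
	fmt.Println(link, "->", cert)
}
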
	I0224 01:00:44.444029   21922 kubeadm.go:401] StartCluster: {Name:multinode-858631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-858631 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 01:00:44.444176   21922 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0224 01:00:44.468715   21922 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0224 01:00:44.477644   21922 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0224 01:00:44.477667   21922 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0224 01:00:44.477674   21922 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0224 01:00:44.477727   21922 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 01:00:44.486752   21922 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 01:00:44.495237   21922 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0224 01:00:44.495257   21922 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0224 01:00:44.495271   21922 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0224 01:00:44.495282   21922 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 01:00:44.495448   21922 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 01:00:44.495475   21922 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0224 01:00:44.590193   21922 kubeadm.go:322] W0224 01:00:44.584870    1313 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0224 01:00:44.590215   21922 command_runner.go:130] ! W0224 01:00:44.584870    1313 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0224 01:00:44.841325   21922 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 01:00:44.841349   21922 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 01:00:59.743641   21922 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0224 01:00:59.743665   21922 command_runner.go:130] > [init] Using Kubernetes version: v1.26.1
	I0224 01:00:59.743714   21922 kubeadm.go:322] [preflight] Running pre-flight checks
	I0224 01:00:59.743745   21922 command_runner.go:130] > [preflight] Running pre-flight checks
	I0224 01:00:59.743871   21922 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 01:00:59.743885   21922 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 01:00:59.744009   21922 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 01:00:59.744023   21922 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 01:00:59.744130   21922 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0224 01:00:59.744137   21922 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0224 01:00:59.744189   21922 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 01:00:59.746047   21922 out.go:204]   - Generating certificates and keys ...
	I0224 01:00:59.744264   21922 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 01:00:59.746123   21922 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0224 01:00:59.746138   21922 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0224 01:00:59.746201   21922 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0224 01:00:59.746212   21922 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0224 01:00:59.746316   21922 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0224 01:00:59.746327   21922 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0224 01:00:59.746412   21922 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0224 01:00:59.746420   21922 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0224 01:00:59.746504   21922 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0224 01:00:59.746514   21922 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0224 01:00:59.746582   21922 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0224 01:00:59.746595   21922 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0224 01:00:59.746661   21922 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0224 01:00:59.746667   21922 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0224 01:00:59.746803   21922 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-858631] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0224 01:00:59.746813   21922 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-858631] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0224 01:00:59.746884   21922 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0224 01:00:59.746888   21922 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0224 01:00:59.747037   21922 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-858631] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0224 01:00:59.747049   21922 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-858631] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0224 01:00:59.747109   21922 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0224 01:00:59.747115   21922 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0224 01:00:59.747186   21922 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0224 01:00:59.747197   21922 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0224 01:00:59.747264   21922 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0224 01:00:59.747273   21922 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0224 01:00:59.747342   21922 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 01:00:59.747350   21922 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 01:00:59.747415   21922 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 01:00:59.747422   21922 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 01:00:59.747474   21922 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 01:00:59.747480   21922 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 01:00:59.747552   21922 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 01:00:59.747559   21922 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 01:00:59.747620   21922 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 01:00:59.747627   21922 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 01:00:59.747749   21922 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 01:00:59.747758   21922 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 01:00:59.747870   21922 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 01:00:59.747879   21922 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 01:00:59.747924   21922 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0224 01:00:59.747931   21922 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0224 01:00:59.748009   21922 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 01:00:59.748018   21922 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 01:00:59.750647   21922 out.go:204]   - Booting up control plane ...
	I0224 01:00:59.750742   21922 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 01:00:59.750755   21922 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 01:00:59.750830   21922 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 01:00:59.750845   21922 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 01:00:59.750912   21922 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 01:00:59.750924   21922 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 01:00:59.751017   21922 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 01:00:59.751029   21922 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 01:00:59.751189   21922 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0224 01:00:59.751197   21922 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0224 01:00:59.751292   21922 command_runner.go:130] > [apiclient] All control plane components are healthy after 10.503870 seconds
	I0224 01:00:59.751300   21922 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503870 seconds
	I0224 01:00:59.751411   21922 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0224 01:00:59.751419   21922 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0224 01:00:59.751574   21922 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0224 01:00:59.751583   21922 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0224 01:00:59.751629   21922 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0224 01:00:59.751634   21922 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0224 01:00:59.751784   21922 command_runner.go:130] > [mark-control-plane] Marking the node multinode-858631 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0224 01:00:59.751790   21922 kubeadm.go:322] [mark-control-plane] Marking the node multinode-858631 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0224 01:00:59.751839   21922 command_runner.go:130] > [bootstrap-token] Using token: wc0vru.55w1txftddrsz4y0
	I0224 01:00:59.751845   21922 kubeadm.go:322] [bootstrap-token] Using token: wc0vru.55w1txftddrsz4y0
	I0224 01:00:59.753292   21922 out.go:204]   - Configuring RBAC rules ...
	I0224 01:00:59.753399   21922 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0224 01:00:59.753412   21922 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0224 01:00:59.753524   21922 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0224 01:00:59.753532   21922 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0224 01:00:59.753647   21922 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0224 01:00:59.753655   21922 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0224 01:00:59.753754   21922 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0224 01:00:59.753758   21922 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0224 01:00:59.753852   21922 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0224 01:00:59.753855   21922 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0224 01:00:59.753922   21922 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0224 01:00:59.753928   21922 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0224 01:00:59.754036   21922 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0224 01:00:59.754043   21922 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0224 01:00:59.754077   21922 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0224 01:00:59.754082   21922 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0224 01:00:59.754117   21922 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0224 01:00:59.754122   21922 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0224 01:00:59.754126   21922 kubeadm.go:322] 
	I0224 01:00:59.754174   21922 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0224 01:00:59.754180   21922 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0224 01:00:59.754183   21922 kubeadm.go:322] 
	I0224 01:00:59.754243   21922 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0224 01:00:59.754249   21922 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0224 01:00:59.754252   21922 kubeadm.go:322] 
	I0224 01:00:59.754272   21922 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0224 01:00:59.754278   21922 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0224 01:00:59.754358   21922 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0224 01:00:59.754371   21922 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0224 01:00:59.754441   21922 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0224 01:00:59.754450   21922 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0224 01:00:59.754456   21922 kubeadm.go:322] 
	I0224 01:00:59.754528   21922 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0224 01:00:59.754536   21922 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0224 01:00:59.754541   21922 kubeadm.go:322] 
	I0224 01:00:59.754620   21922 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0224 01:00:59.754629   21922 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0224 01:00:59.754634   21922 kubeadm.go:322] 
	I0224 01:00:59.754702   21922 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0224 01:00:59.754711   21922 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0224 01:00:59.754819   21922 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0224 01:00:59.754838   21922 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0224 01:00:59.754927   21922 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0224 01:00:59.754936   21922 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0224 01:00:59.754947   21922 kubeadm.go:322] 
	I0224 01:00:59.755050   21922 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0224 01:00:59.755057   21922 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0224 01:00:59.755161   21922 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0224 01:00:59.755173   21922 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0224 01:00:59.755179   21922 kubeadm.go:322] 
	I0224 01:00:59.755275   21922 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token wc0vru.55w1txftddrsz4y0 \
	I0224 01:00:59.755286   21922 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token wc0vru.55w1txftddrsz4y0 \
	I0224 01:00:59.755398   21922 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:ffed4a97d00853d225d0ff07158c2bc3f749ee93cc75ad31fd39c6be0c93fde1 \
	I0224 01:00:59.755408   21922 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ffed4a97d00853d225d0ff07158c2bc3f749ee93cc75ad31fd39c6be0c93fde1 \
	I0224 01:00:59.755431   21922 command_runner.go:130] > 	--control-plane 
	I0224 01:00:59.755444   21922 kubeadm.go:322] 	--control-plane 
	I0224 01:00:59.755456   21922 kubeadm.go:322] 
	I0224 01:00:59.755555   21922 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0224 01:00:59.755563   21922 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0224 01:00:59.755568   21922 kubeadm.go:322] 
	I0224 01:00:59.755668   21922 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token wc0vru.55w1txftddrsz4y0 \
	I0224 01:00:59.755676   21922 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token wc0vru.55w1txftddrsz4y0 \
	I0224 01:00:59.755788   21922 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:ffed4a97d00853d225d0ff07158c2bc3f749ee93cc75ad31fd39c6be0c93fde1 
	I0224 01:00:59.755805   21922 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ffed4a97d00853d225d0ff07158c2bc3f749ee93cc75ad31fd39c6be0c93fde1 
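
The --discovery-token-ca-cert-hash printed above lets joining nodes pin the cluster CA: kubeadm's hash is a SHA-256 digest of the CA certificate's DER-encoded SubjectPublicKeyInfo. A small Go sketch that recomputes it from the ca.crt used in this run:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// CA path used throughout this run; adjust for other clusters.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// SHA-256 over the DER-encoded SubjectPublicKeyInfo, hex-encoded.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
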
	I0224 01:00:59.755816   21922 cni.go:84] Creating CNI manager for ""
	I0224 01:00:59.755830   21922 cni.go:136] 1 nodes found, recommending kindnet
	I0224 01:00:59.757492   21922 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0224 01:00:59.758857   21922 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0224 01:00:59.769348   21922 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0224 01:00:59.769367   21922 command_runner.go:130] >   Size: 2798344   	Blocks: 5472       IO Block: 4096   regular file
	I0224 01:00:59.769376   21922 command_runner.go:130] > Device: 11h/17d	Inode: 3542        Links: 1
	I0224 01:00:59.769386   21922 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0224 01:00:59.769395   21922 command_runner.go:130] > Access: 2023-02-24 01:00:20.396182736 +0000
	I0224 01:00:59.769404   21922 command_runner.go:130] > Modify: 2023-02-16 22:59:55.000000000 +0000
	I0224 01:00:59.769413   21922 command_runner.go:130] > Change: 2023-02-24 01:00:18.603182736 +0000
	I0224 01:00:59.769423   21922 command_runner.go:130] >  Birth: -
	I0224 01:00:59.769568   21922 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0224 01:00:59.769585   21922 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0224 01:00:59.810357   21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0224 01:01:00.838101   21922 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0224 01:01:00.846024   21922 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0224 01:01:00.854961   21922 command_runner.go:130] > serviceaccount/kindnet created
	I0224 01:01:00.872715   21922 command_runner.go:130] > daemonset.apps/kindnet created
	I0224 01:01:00.876230   21922 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.065843677s)
	I0224 01:01:00.876272   21922 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0224 01:01:00.876358   21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 01:01:00.876406   21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=c13299ce0b45f38f7f45d3bc31124c3ea59c0510 minikube.k8s.io/name=multinode-858631 minikube.k8s.io/updated_at=2023_02_24T01_01_00_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 01:01:00.902234   21922 command_runner.go:130] > -16
	I0224 01:01:00.902361   21922 ops.go:34] apiserver oom_adj: -16
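
The oom_adj probe above confirms the kubelet has marked kube-apiserver as nearly unkillable under memory pressure (-16 in this run). A sketch of the same check in Go (a hypothetical helper mirroring the `cat /proc/$(pgrep kube-apiserver)/oom_adj` command):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "pgrep:", err)
		return
	}
	pids := strings.Fields(string(out))
	if len(pids) == 0 {
		fmt.Fprintln(os.Stderr, "kube-apiserver not running")
		return
	}
	// /proc/<pid>/oom_adj is the legacy knob the log reads; -16 means "protect".
	adj, err := os.ReadFile("/proc/" + pids[0] + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		return
	}
	fmt.Println("kube-apiserver oom_adj:", strings.TrimSpace(string(adj)))
}
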
	I0224 01:01:01.028438   21922 command_runner.go:130] > node/multinode-858631 labeled
	I0224 01:01:01.028488   21922 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0224 01:01:01.028594   21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 01:01:01.108266   21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 01:01:01.609302   21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 01:01:01.699333   21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 01:01:02.108806   21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 01:01:02.198922   21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 01:01:02.609703   21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 01:01:02.697066   21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 01:01:03.109297   21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 01:01:03.216079   21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 01:01:03.609289   21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 01:01:03.700025   21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 01:01:04.109222   21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 01:01:04.209162   21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 01:01:04.609636   21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 01:01:04.697004   21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 01:01:05.109567   21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 01:01:05.225327   21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 01:01:05.609539   21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 01:01:05.698814   21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 01:01:06.109440   21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 01:01:06.206194   21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 01:01:06.609075   21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 01:01:06.696484   21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 01:01:07.109227   21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 01:01:07.204309   21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 01:01:07.608808   21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 01:01:07.689929   21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 01:01:08.109675   21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 01:01:08.217755   21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 01:01:08.609397   21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 01:01:08.684146   21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 01:01:09.109564   21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 01:01:09.286262   21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 01:01:09.609698   21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 01:01:09.710691   21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 01:01:10.108903   21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 01:01:10.216001   21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 01:01:10.608840   21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 01:01:10.708540   21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 01:01:11.109253   21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 01:01:11.253967   21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 01:01:11.609554   21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 01:01:11.719570   21922 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 01:01:12.109086   21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 01:01:12.213727   21922 command_runner.go:130] > NAME      SECRETS   AGE
	I0224 01:01:12.213753   21922 command_runner.go:130] > default   0         1s
	I0224 01:01:12.215256   21922 kubeadm.go:1073] duration metric: took 11.338954657s to wait for elevateKubeSystemPrivileges.
	I0224 01:01:12.215285   21922 kubeadm.go:403] StartCluster complete in 27.771261829s
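
The burst of `serviceaccounts "default" not found` lines above is a poll: kubeadm returns before the token controller has created the default ServiceAccount, so minikube retries roughly every 500ms until it appears. A simplified Go sketch of that wait loop, shelling out to a local kubectl rather than minikube's remote runner:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls until the token controller has created the
// "default" ServiceAccount, mirroring the retry loop in the log above.
func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "sa", "default")
		if cmd.Run() == nil {
			return nil // account exists; kube-system privileges can be elevated
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default serviceaccount not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
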
	I0224 01:01:12.215305   21922 settings.go:142] acquiring lock: {Name:mk174257a2297336a9e6f80080faa7ef819759a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 01:01:12.215390   21922 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15909-4074/kubeconfig
	I0224 01:01:12.216091   21922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-4074/kubeconfig: {Name:mk7a14c2c6ccf91ba70e9a5ad74574ac5676cf63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 01:01:12.216320   21922 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0224 01:01:12.216450   21922 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0224 01:01:12.216511   21922 config.go:182] Loaded profile config "multinode-858631": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 01:01:12.216544   21922 addons.go:65] Setting default-storageclass=true in profile "multinode-858631"
	I0224 01:01:12.216559   21922 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-858631"
	I0224 01:01:12.216536   21922 addons.go:65] Setting storage-provisioner=true in profile "multinode-858631"
	I0224 01:01:12.216602   21922 addons.go:227] Setting addon storage-provisioner=true in "multinode-858631"
	I0224 01:01:12.216659   21922 host.go:66] Checking if "multinode-858631" exists ...
	I0224 01:01:12.216665   21922 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-4074/kubeconfig
	I0224 01:01:12.216956   21922 kapi.go:59] client config for multinode-858631: &rest.Config{Host:"https://192.168.39.217:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.key", CAFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 01:01:12.217047   21922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:01:12.217053   21922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:01:12.217076   21922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:01:12.217078   21922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:01:12.217813   21922 cert_rotation.go:137] Starting client certificate rotation controller
	I0224 01:01:12.217937   21922 round_trippers.go:463] GET https://192.168.39.217:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0224 01:01:12.217951   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:12.217959   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:12.217966   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:12.232172   21922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33789
	I0224 01:01:12.232489   21922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44795
	I0224 01:01:12.232557   21922 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:01:12.232849   21922 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:01:12.233027   21922 main.go:141] libmachine: Using API Version  1
	I0224 01:01:12.233048   21922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:01:12.233285   21922 main.go:141] libmachine: Using API Version  1
	I0224 01:01:12.233310   21922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:01:12.233391   21922 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:01:12.233611   21922 main.go:141] libmachine: (multinode-858631) Calling .GetState
	I0224 01:01:12.233617   21922 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:01:12.234165   21922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:01:12.234213   21922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:01:12.235121   21922 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0224 01:01:12.235141   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:12.235151   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:12.235160   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:12.235169   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:12.235178   21922 round_trippers.go:580]     Content-Length: 291
	I0224 01:01:12.235185   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:12 GMT
	I0224 01:01:12.235192   21922 round_trippers.go:580]     Audit-Id: 6f7763fc-fcae-4207-bdd2-f51554563a10
	I0224 01:01:12.235200   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:12.235227   21922 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1feec0bc-8f6f-4ed8-8e86-04a25e711058","resourceVersion":"352","creationTimestamp":"2023-02-24T01:00:59Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0224 01:01:12.235598   21922 request.go:1171] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1feec0bc-8f6f-4ed8-8e86-04a25e711058","resourceVersion":"352","creationTimestamp":"2023-02-24T01:00:59Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0224 01:01:12.235635   21922 round_trippers.go:463] PUT https://192.168.39.217:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0224 01:01:12.235638   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:12.235645   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:12.235651   21922 round_trippers.go:473]     Content-Type: application/json
	I0224 01:01:12.235657   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:12.235766   21922 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-4074/kubeconfig
	I0224 01:01:12.236092   21922 kapi.go:59] client config for multinode-858631: &rest.Config{Host:"https://192.168.39.217:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.key", CAFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 01:01:12.236462   21922 round_trippers.go:463] GET https://192.168.39.217:8443/apis/storage.k8s.io/v1/storageclasses
	I0224 01:01:12.236478   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:12.236490   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:12.236500   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:12.240657   21922 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 01:01:12.240677   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:12.240687   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:12.240697   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:12.240709   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:12.240720   21922 round_trippers.go:580]     Content-Length: 109
	I0224 01:01:12.240732   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:12 GMT
	I0224 01:01:12.240744   21922 round_trippers.go:580]     Audit-Id: 86b2bafb-230d-4596-8c45-d078c1ca8038
	I0224 01:01:12.240757   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:12.240776   21922 request.go:1171] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"352"},"items":[]}
	I0224 01:01:12.241022   21922 addons.go:227] Setting addon default-storageclass=true in "multinode-858631"
	I0224 01:01:12.241054   21922 host.go:66] Checking if "multinode-858631" exists ...
	I0224 01:01:12.241394   21922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:01:12.241435   21922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:01:12.248639   21922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37785
	I0224 01:01:12.249093   21922 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:01:12.249620   21922 main.go:141] libmachine: Using API Version  1
	I0224 01:01:12.249638   21922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:01:12.249930   21922 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:01:12.250122   21922 main.go:141] libmachine: (multinode-858631) Calling .GetState
	I0224 01:01:12.251892   21922 main.go:141] libmachine: (multinode-858631) Calling .DriverName
	I0224 01:01:12.254052   21922 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 01:01:12.252824   21922 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0224 01:01:12.255572   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:12.255586   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:12.255600   21922 round_trippers.go:580]     Content-Length: 291
	I0224 01:01:12.255610   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:12 GMT
	I0224 01:01:12.255622   21922 round_trippers.go:580]     Audit-Id: b1655ca1-a32d-41b1-aa84-2e1341e00c48
	I0224 01:01:12.255633   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:12.255644   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:12.255654   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:12.255684   21922 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1feec0bc-8f6f-4ed8-8e86-04a25e711058","resourceVersion":"353","creationTimestamp":"2023-02-24T01:00:59Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0224 01:01:12.255699   21922 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 01:01:12.255718   21922 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0224 01:01:12.255741   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
	I0224 01:01:12.256724   21922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44861
	I0224 01:01:12.257065   21922 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:01:12.257563   21922 main.go:141] libmachine: Using API Version  1
	I0224 01:01:12.257590   21922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:01:12.258021   21922 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:01:12.258536   21922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:01:12.258577   21922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:01:12.259144   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:01:12.259618   21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
	I0224 01:01:12.259646   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:01:12.259793   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
	I0224 01:01:12.259963   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
	I0224 01:01:12.260106   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
	I0224 01:01:12.260222   21922 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631/id_rsa Username:docker}
	I0224 01:01:12.272835   21922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46481
	I0224 01:01:12.273230   21922 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:01:12.273705   21922 main.go:141] libmachine: Using API Version  1
	I0224 01:01:12.273729   21922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:01:12.274061   21922 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:01:12.274270   21922 main.go:141] libmachine: (multinode-858631) Calling .GetState
	I0224 01:01:12.275817   21922 main.go:141] libmachine: (multinode-858631) Calling .DriverName
	I0224 01:01:12.276041   21922 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0224 01:01:12.276055   21922 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0224 01:01:12.276067   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
	I0224 01:01:12.278828   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:01:12.279229   21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
	I0224 01:01:12.279256   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:01:12.279526   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
	I0224 01:01:12.279695   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
	I0224 01:01:12.279847   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
	I0224 01:01:12.279958   21922 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631/id_rsa Username:docker}
	I0224 01:01:12.493779   21922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0224 01:01:12.609491   21922 command_runner.go:130] > apiVersion: v1
	I0224 01:01:12.609510   21922 command_runner.go:130] > data:
	I0224 01:01:12.609514   21922 command_runner.go:130] >   Corefile: |
	I0224 01:01:12.609518   21922 command_runner.go:130] >     .:53 {
	I0224 01:01:12.609522   21922 command_runner.go:130] >         errors
	I0224 01:01:12.609526   21922 command_runner.go:130] >         health {
	I0224 01:01:12.609531   21922 command_runner.go:130] >            lameduck 5s
	I0224 01:01:12.609534   21922 command_runner.go:130] >         }
	I0224 01:01:12.609538   21922 command_runner.go:130] >         ready
	I0224 01:01:12.609544   21922 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0224 01:01:12.609548   21922 command_runner.go:130] >            pods insecure
	I0224 01:01:12.609553   21922 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0224 01:01:12.609564   21922 command_runner.go:130] >            ttl 30
	I0224 01:01:12.609568   21922 command_runner.go:130] >         }
	I0224 01:01:12.609576   21922 command_runner.go:130] >         prometheus :9153
	I0224 01:01:12.609581   21922 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0224 01:01:12.609588   21922 command_runner.go:130] >            max_concurrent 1000
	I0224 01:01:12.609591   21922 command_runner.go:130] >         }
	I0224 01:01:12.609596   21922 command_runner.go:130] >         cache 30
	I0224 01:01:12.609603   21922 command_runner.go:130] >         loop
	I0224 01:01:12.609606   21922 command_runner.go:130] >         reload
	I0224 01:01:12.609610   21922 command_runner.go:130] >         loadbalance
	I0224 01:01:12.609614   21922 command_runner.go:130] >     }
	I0224 01:01:12.609618   21922 command_runner.go:130] > kind: ConfigMap
	I0224 01:01:12.609622   21922 command_runner.go:130] > metadata:
	I0224 01:01:12.609630   21922 command_runner.go:130] >   creationTimestamp: "2023-02-24T01:00:59Z"
	I0224 01:01:12.609636   21922 command_runner.go:130] >   name: coredns
	I0224 01:01:12.609641   21922 command_runner.go:130] >   namespace: kube-system
	I0224 01:01:12.609646   21922 command_runner.go:130] >   resourceVersion: "237"
	I0224 01:01:12.609651   21922 command_runner.go:130] >   uid: 9d4033e4-3349-4156-a8a9-b90674355b37
	I0224 01:01:12.611617   21922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 01:01:12.636652   21922 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
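Note: the sed pipeline above rewrites the Corefile dumped at 01:01:12.609 in two places: it inserts a hosts block ahead of the forward directive, mapping host.minikube.internal to the host-side gateway (192.168.39.1 in this run), and a log directive ahead of errors. A minimal sketch of the same hosts edit done directly with client-go follows; it is an illustration only (hypothetical helper name, standard client-go assumed), since the test binary itself shells out to kubectl over SSH exactly as logged.

// Sketch only: inject the host.minikube.internal record with client-go
// instead of the logged kubectl-over-SSH pipeline. injectHostRecord is a
// hypothetical helper; hostIP would be 192.168.39.1 in this run.
package example

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func injectHostRecord(ctx context.Context, cs kubernetes.Interface, hostIP string) error {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	// Splice a hosts{} block in front of the forward directive -- the same
	// edit the sed expression in the logged command performs.
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "        forward .", hosts+"        forward .", 1)
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	return err
}

Running the edit through kubectl on the node, as the log shows, presumably keeps it working even when the API server is only reachable from inside the VM; the sketch assumes direct API access from the test host.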
	I0224 01:01:12.756223   21922 round_trippers.go:463] GET https://192.168.39.217:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0224 01:01:12.756241   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:12.756249   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:12.756255   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:12.759162   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:12.759177   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:12.759185   21922 round_trippers.go:580]     Audit-Id: 47bc6633-bcde-441e-aa15-cff68c986372
	I0224 01:01:12.759192   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:12.759200   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:12.759208   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:12.759220   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:12.759230   21922 round_trippers.go:580]     Content-Length: 291
	I0224 01:01:12.759241   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:12 GMT
	I0224 01:01:12.759262   21922 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1feec0bc-8f6f-4ed8-8e86-04a25e711058","resourceVersion":"363","creationTimestamp":"2023-02-24T01:00:59Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0224 01:01:12.759400   21922 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-858631" context rescaled to 1 replicas
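Note: the GET on the scale subresource above returned spec.replicas already at 1, so no write-back was needed before the "rescaled to 1 replicas" line. A sketch of that read-check-write round trip with client-go (hypothetical helper, standard client-go assumed; not minikube's kapi.go code):

// Sketch only: the scale-subresource round trip behind the kapi.go:248 line.
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface, want int32) error {
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if scale.Spec.Replicas == want {
		return nil // already at the desired count, as in this run
	}
	scale.Spec.Replicas = want
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}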
	I0224 01:01:12.759428   21922 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0224 01:01:12.761111   21922 out.go:177] * Verifying Kubernetes components...
	I0224 01:01:12.763028   21922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 01:01:13.433404   21922 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0224 01:01:13.435510   21922 main.go:141] libmachine: Making call to close driver server
	I0224 01:01:13.435529   21922 main.go:141] libmachine: (multinode-858631) Calling .Close
	I0224 01:01:13.435849   21922 main.go:141] libmachine: Successfully made call to close driver server
	I0224 01:01:13.435865   21922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 01:01:13.435875   21922 main.go:141] libmachine: Making call to close driver server
	I0224 01:01:13.435883   21922 main.go:141] libmachine: (multinode-858631) Calling .Close
	I0224 01:01:13.436083   21922 main.go:141] libmachine: Successfully made call to close driver server
	I0224 01:01:13.436108   21922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 01:01:13.436121   21922 main.go:141] libmachine: Making call to close driver server
	I0224 01:01:13.436136   21922 main.go:141] libmachine: (multinode-858631) Calling .Close
	I0224 01:01:13.436151   21922 main.go:141] libmachine: (multinode-858631) DBG | Closing plugin on server side
	I0224 01:01:13.436358   21922 main.go:141] libmachine: Successfully made call to close driver server
	I0224 01:01:13.436373   21922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 01:01:13.436359   21922 main.go:141] libmachine: (multinode-858631) DBG | Closing plugin on server side
	I0224 01:01:13.543744   21922 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0224 01:01:13.543773   21922 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0224 01:01:13.543783   21922 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0224 01:01:13.543799   21922 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0224 01:01:13.543808   21922 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0224 01:01:13.543817   21922 command_runner.go:130] > pod/storage-provisioner created
	I0224 01:01:13.543853   21922 main.go:141] libmachine: Making call to close driver server
	I0224 01:01:13.543870   21922 main.go:141] libmachine: (multinode-858631) Calling .Close
	I0224 01:01:13.544150   21922 main.go:141] libmachine: Successfully made call to close driver server
	I0224 01:01:13.544166   21922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 01:01:13.544176   21922 main.go:141] libmachine: Making call to close driver server
	I0224 01:01:13.544176   21922 main.go:141] libmachine: (multinode-858631) DBG | Closing plugin on server side
	I0224 01:01:13.544184   21922 main.go:141] libmachine: (multinode-858631) Calling .Close
	I0224 01:01:13.544532   21922 main.go:141] libmachine: Successfully made call to close driver server
	I0224 01:01:13.544545   21922 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 01:01:13.546148   21922 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0224 01:01:13.547278   21922 addons.go:492] enable addons completed in 1.330830289s: enabled=[default-storageclass storage-provisioner]
	I0224 01:01:13.577322   21922 command_runner.go:130] > configmap/coredns replaced
	I0224 01:01:13.580124   21922 start.go:921] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0224 01:01:13.580426   21922 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-4074/kubeconfig
	I0224 01:01:13.580620   21922 kapi.go:59] client config for multinode-858631: &rest.Config{Host:"https://192.168.39.217:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.key", CAFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 01:01:13.580843   21922 node_ready.go:35] waiting up to 6m0s for node "multinode-858631" to be "Ready" ...
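Note: the stream of GET /api/v1/nodes/multinode-858631 requests that follows is a readiness poll at roughly 500ms intervals; it stops once the Node's Ready condition flips to True (at 01:01:23 below). A minimal sketch of such a poll, assuming client-go's wait helpers (hypothetical helper; not minikube's exact node_ready.go code):

// Sketch only: poll the Node until its Ready condition reports True,
// mirroring the ~500ms GET cadence visible in the log below.
package example

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat errors as "not ready yet" and keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}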
	I0224 01:01:13.580893   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:13.580900   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:13.580908   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:13.580917   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:13.583098   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:13.583114   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:13.583121   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:13 GMT
	I0224 01:01:13.583127   21922 round_trippers.go:580]     Audit-Id: 5d470667-6887-446a-bebf-6e3e2aea5567
	I0224 01:01:13.583135   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:13.583143   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:13.583151   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:13.583157   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:13.583285   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0224 01:01:14.084606   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:14.084630   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:14.084638   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:14.084644   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:14.087319   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:14.087339   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:14.087346   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:14.087352   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:14.087357   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:14.087363   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:14.087369   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:14 GMT
	I0224 01:01:14.087381   21922 round_trippers.go:580]     Audit-Id: 3e148909-924b-4e56-9530-4d72b3e00728
	I0224 01:01:14.088003   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0224 01:01:14.584763   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:14.584787   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:14.584795   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:14.584801   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:14.587112   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:14.587133   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:14.587140   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:14.587146   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:14 GMT
	I0224 01:01:14.587152   21922 round_trippers.go:580]     Audit-Id: 73fe93f4-f96c-45f9-a326-fa27738ab670
	I0224 01:01:14.587157   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:14.587169   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:14.587174   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:14.587324   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0224 01:01:15.084965   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:15.084991   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:15.085004   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:15.085014   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:15.087317   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:15.087335   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:15.087342   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:15.087348   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:15.087353   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:15.087358   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:15 GMT
	I0224 01:01:15.087370   21922 round_trippers.go:580]     Audit-Id: 7e26222f-1b9b-454f-90dd-f6fd98d9d7e0
	I0224 01:01:15.087378   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:15.087703   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0224 01:01:15.584161   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:15.584199   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:15.584207   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:15.584213   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:15.586675   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:15.586692   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:15.586699   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:15 GMT
	I0224 01:01:15.586711   21922 round_trippers.go:580]     Audit-Id: 1a17cd53-e8b0-414f-b592-0bd076c56659
	I0224 01:01:15.586722   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:15.586738   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:15.586750   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:15.586760   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:15.586870   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0224 01:01:15.587193   21922 node_ready.go:58] node "multinode-858631" has status "Ready":"False"
	I0224 01:01:16.084499   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:16.084524   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:16.084540   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:16.084548   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:16.088844   21922 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 01:01:16.088864   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:16.088871   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:16.088886   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:16 GMT
	I0224 01:01:16.088898   21922 round_trippers.go:580]     Audit-Id: 2a09160b-8c5c-43ec-914f-898c6fdcd59f
	I0224 01:01:16.088908   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:16.088917   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:16.088928   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:16.089187   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0224 01:01:16.584888   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:16.584915   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:16.584926   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:16.584934   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:16.587497   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:16.587517   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:16.587524   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:16.587535   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:16.587549   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:16 GMT
	I0224 01:01:16.587567   21922 round_trippers.go:580]     Audit-Id: e6ab8071-7f78-4427-892c-817dab6fea51
	I0224 01:01:16.587575   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:16.587584   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:16.587844   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0224 01:01:17.084561   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:17.084586   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:17.084598   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:17.084606   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:17.087457   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:17.087475   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:17.087481   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:17.087487   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:17.087493   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:17 GMT
	I0224 01:01:17.087504   21922 round_trippers.go:580]     Audit-Id: 3b897faa-02de-4667-aae4-379682378ba7
	I0224 01:01:17.087517   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:17.087526   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:17.087836   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0224 01:01:17.583949   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:17.583972   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:17.583980   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:17.583991   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:17.586623   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:17.586648   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:17.586657   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:17 GMT
	I0224 01:01:17.586667   21922 round_trippers.go:580]     Audit-Id: 8ddbc8e0-0198-417b-bbc6-bd40f1c2724c
	I0224 01:01:17.586676   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:17.586684   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:17.586693   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:17.586702   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:17.586834   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0224 01:01:18.084547   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:18.084576   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:18.084588   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:18.084598   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:18.087229   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:18.087252   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:18.087263   21922 round_trippers.go:580]     Audit-Id: cd6f5bc7-5d30-4584-8c61-ddc6f9c0549d
	I0224 01:01:18.087272   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:18.087280   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:18.087289   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:18.087301   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:18.087310   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:18 GMT
	I0224 01:01:18.087578   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0224 01:01:18.087889   21922 node_ready.go:58] node "multinode-858631" has status "Ready":"False"
	I0224 01:01:18.584271   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:18.584299   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:18.584312   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:18.584323   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:18.587574   21922 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 01:01:18.587592   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:18.587607   21922 round_trippers.go:580]     Audit-Id: fb62ee42-0041-4aff-b80f-d8d1eba05ee6
	I0224 01:01:18.587615   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:18.587623   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:18.587631   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:18.587640   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:18.587657   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:18 GMT
	I0224 01:01:18.588201   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0224 01:01:19.084943   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:19.084967   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:19.084979   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:19.084989   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:19.087624   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:19.087645   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:19.087655   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:19 GMT
	I0224 01:01:19.087664   21922 round_trippers.go:580]     Audit-Id: 53374b9c-4b52-4235-a921-a68a6393da76
	I0224 01:01:19.087671   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:19.087681   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:19.087694   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:19.087707   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:19.087991   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0224 01:01:19.584679   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:19.584700   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:19.584708   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:19.584715   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:19.587473   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:19.587492   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:19.587500   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:19.587506   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:19.587512   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:19 GMT
	I0224 01:01:19.587517   21922 round_trippers.go:580]     Audit-Id: 77c01102-6483-4c90-b2db-304243fc4bbb
	I0224 01:01:19.587523   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:19.587528   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:19.587919   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0224 01:01:20.084608   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:20.084628   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:20.084635   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:20.084642   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:20.087275   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:20.087295   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:20.087302   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:20.087308   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:20.087315   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:20.087320   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:20.087325   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:20 GMT
	I0224 01:01:20.087331   21922 round_trippers.go:580]     Audit-Id: aa93f272-50e8-41fd-8eb7-e5279ff6a5ec
	I0224 01:01:20.087709   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0224 01:01:20.087978   21922 node_ready.go:58] node "multinode-858631" has status "Ready":"False"
	I0224 01:01:20.584338   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:20.584359   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:20.584367   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:20.584373   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:20.587638   21922 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 01:01:20.587660   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:20.587670   21922 round_trippers.go:580]     Audit-Id: e724f7fd-144d-41ad-833f-d1da3b4a75a6
	I0224 01:01:20.587680   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:20.587688   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:20.587696   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:20.587701   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:20.587707   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:20 GMT
	I0224 01:01:20.588084   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0224 01:01:21.084770   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:21.084792   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:21.084800   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:21.084806   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:21.087215   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:21.087233   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:21.087239   21922 round_trippers.go:580]     Audit-Id: 45b773b9-3d4d-4dd6-bb4e-6b60f9caf176
	I0224 01:01:21.087245   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:21.087251   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:21.087256   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:21.087261   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:21.087267   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:21 GMT
	I0224 01:01:21.087669   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0224 01:01:21.584311   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:21.584339   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:21.584347   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:21.584353   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:21.586563   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:21.586581   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:21.586589   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:21 GMT
	I0224 01:01:21.586594   21922 round_trippers.go:580]     Audit-Id: 3a4c58f4-45f2-4b2e-a3c7-879c2062e2b2
	I0224 01:01:21.586600   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:21.586605   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:21.586612   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:21.586625   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:21.586930   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0224 01:01:22.084668   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:22.084692   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:22.084701   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:22.084707   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:22.087278   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:22.087298   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:22.087306   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:22.087312   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:22.087317   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:22.087323   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:22.087332   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:22 GMT
	I0224 01:01:22.087337   21922 round_trippers.go:580]     Audit-Id: ae197a04-b5b2-42b1-9878-150150dc0c4c
	I0224 01:01:22.087775   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0224 01:01:22.088042   21922 node_ready.go:58] node "multinode-858631" has status "Ready":"False"
	I0224 01:01:22.584035   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:22.584057   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:22.584065   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:22.584071   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:22.586776   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:22.586796   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:22.586803   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:22 GMT
	I0224 01:01:22.586809   21922 round_trippers.go:580]     Audit-Id: 010459f3-6025-4ff9-9e08-60193e099995
	I0224 01:01:22.586818   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:22.586827   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:22.586835   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:22.586844   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:22.587125   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0224 01:01:23.084366   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:23.084389   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:23.084397   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:23.084403   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:23.087520   21922 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 01:01:23.087536   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:23.087543   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:23.087549   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:23.087554   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:23.087566   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:23.087575   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:23 GMT
	I0224 01:01:23.087585   21922 round_trippers.go:580]     Audit-Id: 038e94a7-0fef-4d4f-8620-4604d14e25ff
	I0224 01:01:23.087811   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"305","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5098 chars]
	I0224 01:01:23.584487   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:23.584509   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:23.584518   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:23.584524   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:23.587307   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:23.587328   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:23.587339   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:23.587349   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:23.587357   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:23.587363   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:23.587368   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:23 GMT
	I0224 01:01:23.587374   21922 round_trippers.go:580]     Audit-Id: b15d1cc7-31d6-4eb2-991f-77b0e1076313
	I0224 01:01:23.587801   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"391","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
	I0224 01:01:23.588125   21922 node_ready.go:49] node "multinode-858631" has status "Ready":"True"
	I0224 01:01:23.588142   21922 node_ready.go:38] duration metric: took 10.007287326s waiting for node "multinode-858631" to be "Ready" ...
	I0224 01:01:23.588149   21922 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
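
node_ready declares the node Ready once the Node object's Ready condition flips to True — the status visible in the response bodies above. A minimal sketch of that condition check against the k8s.io/api types (the helper name is hypothetical, not minikube's actual function):

    package nodeready

    import corev1 "k8s.io/api/core/v1"

    // nodeIsReady mirrors the check behind the node_ready lines above:
    // a node counts as Ready when its NodeReady condition reports True.
    func nodeIsReady(node *corev1.Node) bool {
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }
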
	I0224 01:01:23.588219   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0224 01:01:23.588227   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:23.588234   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:23.588240   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:23.591200   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:23.591218   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:23.591227   21922 round_trippers.go:580]     Audit-Id: 58254c31-970b-43fb-a5f9-7b97c167eba5
	I0224 01:01:23.591235   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:23.591244   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:23.591253   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:23.591263   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:23.591276   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:23 GMT
	I0224 01:01:23.592076   21922 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"397"},"items":[{"metadata":{"name":"coredns-787d4945fb-xhwx9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"9d799d4f-0d4b-468e-85ad-052c1735e35c","resourceVersion":"396","creationTimestamp":"2023-02-24T01:01:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"48366e19-6f4d-4478-b1a5-1ebb02afef5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48366e19-6f4d-4478-b1a5-1ebb02afef5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53937 chars]
	I0224 01:01:23.594870   21922 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-xhwx9" in "kube-system" namespace to be "Ready" ...
	I0224 01:01:23.594923   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-xhwx9
	I0224 01:01:23.594927   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:23.594934   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:23.594941   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:23.596919   21922 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 01:01:23.596940   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:23.596950   21922 round_trippers.go:580]     Audit-Id: 7263b68a-6b58-4099-92df-5539157b5de2
	I0224 01:01:23.596958   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:23.596966   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:23.596975   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:23.596992   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:23.597001   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:23 GMT
	I0224 01:01:23.597274   21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-xhwx9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"9d799d4f-0d4b-468e-85ad-052c1735e35c","resourceVersion":"396","creationTimestamp":"2023-02-24T01:01:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"48366e19-6f4d-4478-b1a5-1ebb02afef5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48366e19-6f4d-4478-b1a5-1ebb02afef5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0224 01:01:23.597827   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:23.597851   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:23.597862   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:23.597871   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:23.599672   21922 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 01:01:23.599684   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:23.599690   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:23 GMT
	I0224 01:01:23.599695   21922 round_trippers.go:580]     Audit-Id: 17b216d3-bf4e-45f3-8059-a88bb1d93e80
	I0224 01:01:23.599704   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:23.599712   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:23.599721   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:23.599731   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:23.599882   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"391","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
	I0224 01:01:24.100893   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-xhwx9
	I0224 01:01:24.100917   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:24.100928   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:24.100936   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:24.104029   21922 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 01:01:24.104049   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:24.104060   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:24.104068   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:24.104076   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:24 GMT
	I0224 01:01:24.104086   21922 round_trippers.go:580]     Audit-Id: a119985b-691f-46ca-929a-138d1889bfc8
	I0224 01:01:24.104096   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:24.104106   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:24.104254   21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-xhwx9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"9d799d4f-0d4b-468e-85ad-052c1735e35c","resourceVersion":"396","creationTimestamp":"2023-02-24T01:01:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"48366e19-6f4d-4478-b1a5-1ebb02afef5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48366e19-6f4d-4478-b1a5-1ebb02afef5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0224 01:01:24.104866   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:24.104885   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:24.104895   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:24.104908   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:24.107175   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:24.107190   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:24.107199   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:24 GMT
	I0224 01:01:24.107207   21922 round_trippers.go:580]     Audit-Id: ee04c0cd-aacd-4c7a-a15b-8bd4a0754dbb
	I0224 01:01:24.107216   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:24.107223   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:24.107232   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:24.107250   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:24.107587   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"391","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
	I0224 01:01:24.601291   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-xhwx9
	I0224 01:01:24.601317   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:24.601325   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:24.601331   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:24.603562   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:24.603581   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:24.603590   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:24.603598   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:24 GMT
	I0224 01:01:24.603607   21922 round_trippers.go:580]     Audit-Id: c71d1331-6dd6-4b84-b7f2-0085bc89a790
	I0224 01:01:24.603616   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:24.603625   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:24.603631   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:24.603868   21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-xhwx9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"9d799d4f-0d4b-468e-85ad-052c1735e35c","resourceVersion":"396","creationTimestamp":"2023-02-24T01:01:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"48366e19-6f4d-4478-b1a5-1ebb02afef5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48366e19-6f4d-4478-b1a5-1ebb02afef5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0224 01:01:24.604264   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:24.604274   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:24.604281   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:24.604287   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:24.608705   21922 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 01:01:24.608725   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:24.608734   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:24.608743   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:24 GMT
	I0224 01:01:24.608750   21922 round_trippers.go:580]     Audit-Id: 7154afb8-b1e9-4d04-9f66-80e749a371dd
	I0224 01:01:24.608755   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:24.608761   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:24.608766   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:24.608978   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"391","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
	I0224 01:01:25.100631   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-xhwx9
	I0224 01:01:25.100653   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:25.100661   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:25.100668   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:25.103193   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:25.103213   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:25.103221   21922 round_trippers.go:580]     Audit-Id: d694012a-3eac-4d86-ab59-d7b7ba9d8d71
	I0224 01:01:25.103227   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:25.103232   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:25.103237   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:25.103242   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:25.103248   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:25 GMT
	I0224 01:01:25.103690   21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-xhwx9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"9d799d4f-0d4b-468e-85ad-052c1735e35c","resourceVersion":"396","creationTimestamp":"2023-02-24T01:01:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"48366e19-6f4d-4478-b1a5-1ebb02afef5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48366e19-6f4d-4478-b1a5-1ebb02afef5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0224 01:01:25.104084   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:25.104095   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:25.104103   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:25.104109   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:25.106291   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:25.106323   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:25.106334   21922 round_trippers.go:580]     Audit-Id: 274b4dd5-2725-4c3b-929e-2e2b71aefcf1
	I0224 01:01:25.106341   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:25.106350   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:25.106355   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:25.106362   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:25.106369   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:25 GMT
	I0224 01:01:25.106638   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"391","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
	I0224 01:01:25.600273   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-xhwx9
	I0224 01:01:25.600300   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:25.600312   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:25.600323   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:25.602430   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:25.602455   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:25.602466   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:25.602475   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:25 GMT
	I0224 01:01:25.602487   21922 round_trippers.go:580]     Audit-Id: 8cd1c153-4732-44b6-bd55-3054e44f6ac3
	I0224 01:01:25.602496   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:25.602509   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:25.602521   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:25.603051   21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-xhwx9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"9d799d4f-0d4b-468e-85ad-052c1735e35c","resourceVersion":"396","creationTimestamp":"2023-02-24T01:01:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"48366e19-6f4d-4478-b1a5-1ebb02afef5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48366e19-6f4d-4478-b1a5-1ebb02afef5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0224 01:01:25.603685   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:25.603702   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:25.603713   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:25.603724   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:25.605843   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:25.605863   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:25.605872   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:25.605880   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:25.605888   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:25.605903   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:25 GMT
	I0224 01:01:25.605911   21922 round_trippers.go:580]     Audit-Id: 2e4b5ea2-f2f2-4bee-86e8-a582e12e5fdb
	I0224 01:01:25.605920   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:25.606052   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"391","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
	I0224 01:01:25.606389   21922 pod_ready.go:102] pod "coredns-787d4945fb-xhwx9" in "kube-system" namespace has status "Ready":"False"
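
The timestamps show the readiness check re-issuing the same pod and node GETs roughly every 500ms, against the 6m0s budget announced earlier, until the pod's Ready condition turns True. A stdlib-only sketch of a loop with that shape (interval and timeout taken from the log; the condition body is a placeholder, not minikube's code):

    package main

    import (
        "context"
        "errors"
        "time"
    )

    // pollUntil re-checks cond on a fixed interval until it returns true
    // or the context expires — the same shape as the ~500ms pod_ready
    // polling visible in the log.
    func pollUntil(ctx context.Context, interval time.Duration, cond func() (bool, error)) error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            done, err := cond()
            if err != nil {
                return err
            }
            if done {
                return nil
            }
            select {
            case <-ctx.Done():
                return errors.New("timed out waiting for condition")
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        _ = pollUntil(ctx, 500*time.Millisecond, func() (bool, error) {
            // here: GET the pod and inspect its Ready condition
            return true, nil
        })
    }
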
	I0224 01:01:26.100708   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-xhwx9
	I0224 01:01:26.100732   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:26.100741   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:26.100747   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:26.103451   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:26.103467   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:26.103474   21922 round_trippers.go:580]     Audit-Id: 5c7dfe86-4aa4-40c2-9ee9-3313dae7a357
	I0224 01:01:26.103480   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:26.103485   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:26.103490   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:26.103495   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:26.103501   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:26 GMT
	I0224 01:01:26.103894   21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-xhwx9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"9d799d4f-0d4b-468e-85ad-052c1735e35c","resourceVersion":"396","creationTimestamp":"2023-02-24T01:01:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"48366e19-6f4d-4478-b1a5-1ebb02afef5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48366e19-6f4d-4478-b1a5-1ebb02afef5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0224 01:01:26.104277   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:26.104287   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:26.104294   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:26.104300   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:26.106662   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:26.106682   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:26.106693   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:26.106710   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:26.106719   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:26.106726   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:26 GMT
	I0224 01:01:26.106734   21922 round_trippers.go:580]     Audit-Id: ef9bae96-0f84-4647-b2e4-a5f5a64a068e
	I0224 01:01:26.106744   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:26.106851   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"391","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
	I0224 01:01:26.600420   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-xhwx9
	I0224 01:01:26.600439   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:26.600447   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:26.600454   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:26.602649   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:26.602674   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:26.602683   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:26.602692   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:26.602701   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:26.602709   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:26 GMT
	I0224 01:01:26.602719   21922 round_trippers.go:580]     Audit-Id: e88d8e9c-5391-4a14-941d-de87cfccb39f
	I0224 01:01:26.602727   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:26.602859   21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-xhwx9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"9d799d4f-0d4b-468e-85ad-052c1735e35c","resourceVersion":"412","creationTimestamp":"2023-02-24T01:01:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"48366e19-6f4d-4478-b1a5-1ebb02afef5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48366e19-6f4d-4478-b1a5-1ebb02afef5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6282 chars]
	I0224 01:01:26.603477   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:26.603498   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:26.603508   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:26.603518   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:26.606055   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:26.606074   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:26.606083   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:26.606092   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:26.606101   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:26 GMT
	I0224 01:01:26.606112   21922 round_trippers.go:580]     Audit-Id: ebb2e441-cc8c-4c0d-b4ca-ed0b45f9b3d2
	I0224 01:01:26.606119   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:26.606125   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:26.606293   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"391","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
	I0224 01:01:26.606635   21922 pod_ready.go:92] pod "coredns-787d4945fb-xhwx9" in "kube-system" namespace has status "Ready":"True"
	I0224 01:01:26.606655   21922 pod_ready.go:81] duration metric: took 3.011766765s waiting for pod "coredns-787d4945fb-xhwx9" in "kube-system" namespace to be "Ready" ...
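
Each pod wait above is one named GET from the kube-system namespace followed by a check of the pod's Ready condition. With client-go, that round trip looks roughly like the following (the helper name and setup are assumptions, not minikube's code; clientset construction is omitted):

    package podready

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podIsReady performs one round trip like the GETs above: fetch the
    // pod by name from kube-system and test its PodReady condition.
    func podIsReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
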
	I0224 01:01:26.606664   21922 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-858631" in "kube-system" namespace to be "Ready" ...
	I0224 01:01:26.606703   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-858631
	I0224 01:01:26.606713   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:26.606724   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:26.606737   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:26.608533   21922 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 01:01:26.608548   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:26.608558   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:26.608570   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:26 GMT
	I0224 01:01:26.608582   21922 round_trippers.go:580]     Audit-Id: b124132b-7462-4604-a671-a0a09a7a5cec
	I0224 01:01:26.608591   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:26.608603   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:26.608613   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:26.608756   21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-858631","namespace":"kube-system","uid":"7b4b146b-12c8-4b3f-a682-8ab64a9135cb","resourceVersion":"276","creationTimestamp":"2023-02-24T01:01:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.217:2379","kubernetes.io/config.hash":"dc4f8bffc9d97af45e685dda88cd2a94","kubernetes.io/config.mirror":"dc4f8bffc9d97af45e685dda88cd2a94","kubernetes.io/config.seen":"2023-02-24T01:00:59.730785607Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5856 chars]
	I0224 01:01:26.609171   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:26.609183   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:26.609193   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:26.609202   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:26.611270   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:26.611286   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:26.611292   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:26.611301   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:26.611309   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:26.611319   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:26.611328   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:26 GMT
	I0224 01:01:26.611338   21922 round_trippers.go:580]     Audit-Id: 2e18661f-b8dc-435f-a212-7612404a7116
	I0224 01:01:26.611455   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"391","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
	I0224 01:01:26.611702   21922 pod_ready.go:92] pod "etcd-multinode-858631" in "kube-system" namespace has status "Ready":"True"
	I0224 01:01:26.611714   21922 pod_ready.go:81] duration metric: took 5.043331ms waiting for pod "etcd-multinode-858631" in "kube-system" namespace to be "Ready" ...
	I0224 01:01:26.611727   21922 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-858631" in "kube-system" namespace to be "Ready" ...
	I0224 01:01:26.611767   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-858631
	I0224 01:01:26.611777   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:26.611787   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:26.611796   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:26.613664   21922 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 01:01:26.613678   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:26.613685   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:26 GMT
	I0224 01:01:26.613690   21922 round_trippers.go:580]     Audit-Id: f8ed5e9c-7381-42dc-8d48-fae056e18972
	I0224 01:01:26.613695   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:26.613704   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:26.613720   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:26.613732   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:26.614050   21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-858631","namespace":"kube-system","uid":"ad778dac-86be-4c5e-8b3f-2afb354e374a","resourceVersion":"299","creationTimestamp":"2023-02-24T01:01:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.217:8443","kubernetes.io/config.hash":"2a1bcd287381cc62f4271365e9d57dba","kubernetes.io/config.mirror":"2a1bcd287381cc62f4271365e9d57dba","kubernetes.io/config.seen":"2023-02-24T01:00:59.730814539Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7392 chars]
	I0224 01:01:26.614474   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:26.614486   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:26.614497   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:26.614507   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:26.616149   21922 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 01:01:26.616163   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:26.616172   21922 round_trippers.go:580]     Audit-Id: 8759fe08-74f9-46b1-a094-932be9a14de5
	I0224 01:01:26.616181   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:26.616190   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:26.616200   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:26.616213   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:26.616226   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:26 GMT
	I0224 01:01:26.616345   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"391","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
	I0224 01:01:26.616645   21922 pod_ready.go:92] pod "kube-apiserver-multinode-858631" in "kube-system" namespace has status "Ready":"True"
	I0224 01:01:26.616659   21922 pod_ready.go:81] duration metric: took 4.925024ms waiting for pod "kube-apiserver-multinode-858631" in "kube-system" namespace to be "Ready" ...
	I0224 01:01:26.616669   21922 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-858631" in "kube-system" namespace to be "Ready" ...
	I0224 01:01:26.616728   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-858631
	I0224 01:01:26.616739   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:26.616750   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:26.616763   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:26.618337   21922 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 01:01:26.618351   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:26.618362   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:26.618371   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:26.618382   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:26.618394   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:26.618404   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:26 GMT
	I0224 01:01:26.618416   21922 round_trippers.go:580]     Audit-Id: 23371965-5757-4789-b4cf-961c7cab57a8
	I0224 01:01:26.618619   21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-858631","namespace":"kube-system","uid":"c1e4ec9e-a1e9-4f43-8b1b-95c797d33242","resourceVersion":"272","creationTimestamp":"2023-02-24T01:01:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb3b8d57c02f5e81e5a272ffb5f3fbe3","kubernetes.io/config.mirror":"cb3b8d57c02f5e81e5a272ffb5f3fbe3","kubernetes.io/config.seen":"2023-02-24T01:00:59.730815908Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6957 chars]
	I0224 01:01:26.618907   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:26.618919   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:26.618929   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:26.618938   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:26.620446   21922 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 01:01:26.620460   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:26.620469   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:26.620475   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:26.620480   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:26 GMT
	I0224 01:01:26.620487   21922 round_trippers.go:580]     Audit-Id: d7721c7a-e679-4406-8a5c-6f5d29bc2451
	I0224 01:01:26.620498   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:26.620511   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:26.620728   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"391","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
	I0224 01:01:26.621125   21922 pod_ready.go:92] pod "kube-controller-manager-multinode-858631" in "kube-system" namespace has status "Ready":"True"
	I0224 01:01:26.621141   21922 pod_ready.go:81] duration metric: took 4.459168ms waiting for pod "kube-controller-manager-multinode-858631" in "kube-system" namespace to be "Ready" ...
	I0224 01:01:26.621149   21922 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vlrn6" in "kube-system" namespace to be "Ready" ...
	I0224 01:01:26.621196   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vlrn6
	I0224 01:01:26.621206   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:26.621216   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:26.621228   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:26.622885   21922 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 01:01:26.622900   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:26.622909   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:26.622918   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:26.622929   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:26.622945   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:26.622954   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:26 GMT
	I0224 01:01:26.622963   21922 round_trippers.go:580]     Audit-Id: 7c30379b-2e89-445e-99b1-1da9032541bd
	I0224 01:01:26.624804   21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vlrn6","generateName":"kube-proxy-","namespace":"kube-system","uid":"ed1ab279-4267-4c3c-a68d-a729dc29f05b","resourceVersion":"367","creationTimestamp":"2023-02-24T01:01:11Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4ec6a9ff-44a2-44e8-9e3b-270212238f31","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ec6a9ff-44a2-44e8-9e3b-270212238f31\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0224 01:01:26.625630   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:26.625648   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:26.625659   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:26.625669   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:26.627243   21922 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 01:01:26.627257   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:26.627264   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:26.627270   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:26.627275   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:26 GMT
	I0224 01:01:26.627281   21922 round_trippers.go:580]     Audit-Id: c2f4d362-c2cb-4103-921f-08e2de1fd269
	I0224 01:01:26.627286   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:26.627294   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:26.627856   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"391","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
	I0224 01:01:26.628067   21922 pod_ready.go:92] pod "kube-proxy-vlrn6" in "kube-system" namespace has status "Ready":"True"
	I0224 01:01:26.628076   21922 pod_ready.go:81] duration metric: took 6.921536ms waiting for pod "kube-proxy-vlrn6" in "kube-system" namespace to be "Ready" ...
	I0224 01:01:26.628082   21922 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-858631" in "kube-system" namespace to be "Ready" ...
	I0224 01:01:26.801456   21922 request.go:622] Waited for 173.313227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-858631
	I0224 01:01:26.801519   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-858631
	I0224 01:01:26.801526   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:26.801535   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:26.801543   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:26.805761   21922 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 01:01:26.805781   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:26.805794   21922 round_trippers.go:580]     Audit-Id: 4f37a918-fae6-4aff-8189-325f152594da
	I0224 01:01:26.805804   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:26.805822   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:26.805830   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:26.805840   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:26.805849   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:26 GMT
	I0224 01:01:26.805979   21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-858631","namespace":"kube-system","uid":"fcadaacc-9d90-4113-9bf9-b77ccbc47586","resourceVersion":"294","creationTimestamp":"2023-02-24T01:01:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a679af228396ab9ab09a15d1ab16cad8","kubernetes.io/config.mirror":"a679af228396ab9ab09a15d1ab16cad8","kubernetes.io/config.seen":"2023-02-24T01:00:59.730816890Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4687 chars]
	I0224 01:01:27.000585   21922 request.go:622] Waited for 194.277725ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:27.000635   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:01:27.000652   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:27.000659   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:27.000669   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:27.002807   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:27.002824   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:27.002831   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:27 GMT
	I0224 01:01:27.002837   21922 round_trippers.go:580]     Audit-Id: 0f328137-69b3-49f6-af03-7a63cfbf62f8
	I0224 01:01:27.002848   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:27.002861   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:27.002877   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:27.002887   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:27.003230   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"391","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 4953 chars]
	I0224 01:01:27.003488   21922 pod_ready.go:92] pod "kube-scheduler-multinode-858631" in "kube-system" namespace has status "Ready":"True"
	I0224 01:01:27.003504   21922 pod_ready.go:81] duration metric: took 375.411134ms waiting for pod "kube-scheduler-multinode-858631" in "kube-system" namespace to be "Ready" ...
	I0224 01:01:27.003514   21922 pod_ready.go:38] duration metric: took 3.41533873s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
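The pod_ready lines above poll each system pod until its status carries the Ready condition with status True. A minimal sketch of that check in Go, decoding a trimmed, hypothetical pod status rather than the full API response shown in the log:

	// A pod counts as "Ready" when status.conditions contains an entry with
	// type=Ready and status=True; this mirrors the pod_ready.go:92 lines above.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type pod struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}

	func isReady(p pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				return c.Status == "True"
			}
		}
		return false
	}

	func main() {
		// Trimmed, hypothetical payload; the real check reads the pod from the API server.
		payload := []byte(`{"status":{"conditions":[{"type":"Ready","status":"True"}]}}`)
		var p pod
		if err := json.Unmarshal(payload, &p); err != nil {
			panic(err)
		}
		fmt.Println(`pod has status "Ready":`, isReady(p))
	}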
	I0224 01:01:27.003531   21922 api_server.go:51] waiting for apiserver process to appear ...
	I0224 01:01:27.003568   21922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 01:01:27.017182   21922 command_runner.go:130] > 1850
	I0224 01:01:27.017235   21922 api_server.go:71] duration metric: took 14.257786757s to wait for apiserver process to appear ...
	I0224 01:01:27.017248   21922 api_server.go:87] waiting for apiserver healthz status ...
	I0224 01:01:27.017256   21922 api_server.go:252] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0224 01:01:27.022214   21922 api_server.go:278] https://192.168.39.217:8443/healthz returned 200:
	ok
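At this point minikube has confirmed a kube-apiserver process exists (via pgrep) and probes the /healthz endpoint until it answers 200. A minimal sketch of such a probe, assuming a TLS client already configured for the cluster; the retry interval is an assumption, not minikube's actual cadence:

	// Probe an apiserver /healthz endpoint until it returns 200 OK or a deadline passes.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(client *http.Client, url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, body) // mirrors the api_server.go:278 line
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // retry interval is an assumption
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		// InsecureSkipVerify only keeps the sketch self-contained; a real check
		// would trust the cluster CA instead.
		client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
		_ = waitForHealthz(client, "https://192.168.39.217:8443/healthz", time.Minute)
	}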
	I0224 01:01:27.022265   21922 round_trippers.go:463] GET https://192.168.39.217:8443/version
	I0224 01:01:27.022272   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:27.022287   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:27.022301   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:27.023168   21922 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0224 01:01:27.023181   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:27.023191   21922 round_trippers.go:580]     Content-Length: 263
	I0224 01:01:27.023199   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:27 GMT
	I0224 01:01:27.023208   21922 round_trippers.go:580]     Audit-Id: d273673d-546c-4f76-8eae-9c9d5b37652c
	I0224 01:01:27.023221   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:27.023231   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:27.023245   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:27.023255   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:27.023277   21922 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.1",
	  "gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
	  "gitTreeState": "clean",
	  "buildDate": "2023-01-18T15:51:25Z",
	  "goVersion": "go1.19.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0224 01:01:27.023346   21922 api_server.go:140] control plane version: v1.26.1
	I0224 01:01:27.023360   21922 api_server.go:130] duration metric: took 6.106014ms to wait for apiserver health ...
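The /version payload above decodes into a handful of string fields, from which the control-plane version is reported. A sketch of that decode; the struct below simply mirrors the JSON and is not any particular client-go type:

	// Decode the /version response body shown above and pull out gitVersion.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type versionInfo struct {
		Major      string `json:"major"`
		Minor      string `json:"minor"`
		GitVersion string `json:"gitVersion"`
		Platform   string `json:"platform"`
	}

	func main() {
		payload := []byte(`{"major":"1","minor":"26","gitVersion":"v1.26.1","platform":"linux/amd64"}`)
		var v versionInfo
		if err := json.Unmarshal(payload, &v); err != nil {
			panic(err)
		}
		fmt.Println("control plane version:", v.GitVersion) // matches the api_server.go:140 line
	}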
	I0224 01:01:27.023368   21922 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 01:01:27.200772   21922 request.go:622] Waited for 177.339524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0224 01:01:27.200824   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0224 01:01:27.200829   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:27.200836   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:27.200843   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:27.204081   21922 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 01:01:27.204100   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:27.204110   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:27.204118   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:27 GMT
	I0224 01:01:27.204126   21922 round_trippers.go:580]     Audit-Id: 136f9bae-f175-4e4a-832b-35513f14b820
	I0224 01:01:27.204135   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:27.204146   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:27.204157   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:27.205322   21922 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"coredns-787d4945fb-xhwx9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"9d799d4f-0d4b-468e-85ad-052c1735e35c","resourceVersion":"412","creationTimestamp":"2023-02-24T01:01:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"48366e19-6f4d-4478-b1a5-1ebb02afef5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48366e19-6f4d-4478-b1a5-1ebb02afef5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54053 chars]
	I0224 01:01:27.207203   21922 system_pods.go:59] 8 kube-system pods found
	I0224 01:01:27.207231   21922 system_pods.go:61] "coredns-787d4945fb-xhwx9" [9d799d4f-0d4b-468e-85ad-052c1735e35c] Running
	I0224 01:01:27.207237   21922 system_pods.go:61] "etcd-multinode-858631" [7b4b146b-12c8-4b3f-a682-8ab64a9135cb] Running
	I0224 01:01:27.207242   21922 system_pods.go:61] "kindnet-cdxbx" [55b36f8b-ffbe-49b3-99fc-aea074319cd0] Running
	I0224 01:01:27.207246   21922 system_pods.go:61] "kube-apiserver-multinode-858631" [ad778dac-86be-4c5e-8b3f-2afb354e374a] Running
	I0224 01:01:27.207257   21922 system_pods.go:61] "kube-controller-manager-multinode-858631" [c1e4ec9e-a1e9-4f43-8b1b-95c797d33242] Running
	I0224 01:01:27.207262   21922 system_pods.go:61] "kube-proxy-vlrn6" [ed1ab279-4267-4c3c-a68d-a729dc29f05b] Running
	I0224 01:01:27.207266   21922 system_pods.go:61] "kube-scheduler-multinode-858631" [fcadaacc-9d90-4113-9bf9-b77ccbc47586] Running
	I0224 01:01:27.207271   21922 system_pods.go:61] "storage-provisioner" [7ec578fe-05c4-4916-8db9-67ee112c136f] Running
	I0224 01:01:27.207275   21922 system_pods.go:74] duration metric: took 183.902698ms to wait for pod list to return data ...
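The recurring "Waited for ... due to client-side throttling" lines come from the Kubernetes client's own rate limiter, which spaces requests out before they ever reach the server's priority-and-fairness layer; client-go's default of roughly 5 requests per second is consistent with the ~200 ms waits logged here. A stdlib-only sketch of that pacing idea, with illustrative numbers rather than client-go's actual limiter:

	// Space requests out on the client side, logging how long each one waited.
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		tick := time.NewTicker(200 * time.Millisecond) // ~5 requests per second
		defer tick.Stop()
		for i := 0; i < 3; i++ {
			start := time.Now()
			<-tick.C // block until the limiter releases a slot
			fmt.Printf("request %d waited %v due to client-side throttling\n",
				i, time.Since(start).Round(time.Millisecond))
		}
	}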
	I0224 01:01:27.207282   21922 default_sa.go:34] waiting for default service account to be created ...
	I0224 01:01:27.400613   21922 request.go:622] Waited for 193.275621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/default/serviceaccounts
	I0224 01:01:27.400687   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/default/serviceaccounts
	I0224 01:01:27.400695   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:27.400707   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:27.400725   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:27.403265   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:27.403289   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:27.403299   21922 round_trippers.go:580]     Audit-Id: a8644e44-e42f-43e1-8f33-9f762e161490
	I0224 01:01:27.403306   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:27.403314   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:27.403320   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:27.403331   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:27.403336   21922 round_trippers.go:580]     Content-Length: 261
	I0224 01:01:27.403344   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:27 GMT
	I0224 01:01:27.403424   21922 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"7b317ba6-8061-415e-bec5-8cdc2f9b9c04","resourceVersion":"316","creationTimestamp":"2023-02-24T01:01:11Z"}}]}
	I0224 01:01:27.403608   21922 default_sa.go:45] found service account: "default"
	I0224 01:01:27.403621   21922 default_sa.go:55] duration metric: took 196.334205ms for default service account to be created ...
	I0224 01:01:27.403630   21922 system_pods.go:116] waiting for k8s-apps to be running ...
	I0224 01:01:27.601068   21922 request.go:622] Waited for 197.377248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0224 01:01:27.601126   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0224 01:01:27.601131   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:27.601139   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:27.601145   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:27.605257   21922 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 01:01:27.605278   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:27.605288   21922 round_trippers.go:580]     Audit-Id: 754f7a46-5b3a-4878-bc23-1aa06e82181b
	I0224 01:01:27.605303   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:27.605316   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:27.605328   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:27.605338   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:27.605348   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:27 GMT
	I0224 01:01:27.606456   21922 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"coredns-787d4945fb-xhwx9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"9d799d4f-0d4b-468e-85ad-052c1735e35c","resourceVersion":"412","creationTimestamp":"2023-02-24T01:01:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"48366e19-6f4d-4478-b1a5-1ebb02afef5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48366e19-6f4d-4478-b1a5-1ebb02afef5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54053 chars]
	I0224 01:01:27.608040   21922 system_pods.go:86] 8 kube-system pods found
	I0224 01:01:27.608060   21922 system_pods.go:89] "coredns-787d4945fb-xhwx9" [9d799d4f-0d4b-468e-85ad-052c1735e35c] Running
	I0224 01:01:27.608067   21922 system_pods.go:89] "etcd-multinode-858631" [7b4b146b-12c8-4b3f-a682-8ab64a9135cb] Running
	I0224 01:01:27.608074   21922 system_pods.go:89] "kindnet-cdxbx" [55b36f8b-ffbe-49b3-99fc-aea074319cd0] Running
	I0224 01:01:27.608080   21922 system_pods.go:89] "kube-apiserver-multinode-858631" [ad778dac-86be-4c5e-8b3f-2afb354e374a] Running
	I0224 01:01:27.608088   21922 system_pods.go:89] "kube-controller-manager-multinode-858631" [c1e4ec9e-a1e9-4f43-8b1b-95c797d33242] Running
	I0224 01:01:27.608095   21922 system_pods.go:89] "kube-proxy-vlrn6" [ed1ab279-4267-4c3c-a68d-a729dc29f05b] Running
	I0224 01:01:27.608107   21922 system_pods.go:89] "kube-scheduler-multinode-858631" [fcadaacc-9d90-4113-9bf9-b77ccbc47586] Running
	I0224 01:01:27.608118   21922 system_pods.go:89] "storage-provisioner" [7ec578fe-05c4-4916-8db9-67ee112c136f] Running
	I0224 01:01:27.608128   21922 system_pods.go:126] duration metric: took 204.491636ms to wait for k8s-apps to be running ...
	I0224 01:01:27.608143   21922 system_svc.go:44] waiting for kubelet service to be running ....
	I0224 01:01:27.608191   21922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 01:01:27.622145   21922 system_svc.go:56] duration metric: took 13.995035ms WaitForService to wait for kubelet.
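The kubelet check above runs `sudo systemctl is-active --quiet service kubelet` over SSH and treats exit status 0 as "running". A local sketch of the same probe (it does not reproduce minikube's ssh_runner transport):

	// `systemctl is-active --quiet <unit>` exits 0 iff the unit is active,
	// so a nil error from Run means the service is up.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		fmt.Println("kubelet active:", err == nil)
	}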
	I0224 01:01:27.622165   21922 kubeadm.go:578] duration metric: took 14.862716251s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0224 01:01:27.622186   21922 node_conditions.go:102] verifying NodePressure condition ...
	I0224 01:01:27.800523   21922 request.go:622] Waited for 178.264445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes
	I0224 01:01:27.800594   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes
	I0224 01:01:27.800607   21922 round_trippers.go:469] Request Headers:
	I0224 01:01:27.800618   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:01:27.800631   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:01:27.803316   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:01:27.803338   21922 round_trippers.go:577] Response Headers:
	I0224 01:01:27.803348   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:01:27.803356   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:01:27 GMT
	I0224 01:01:27.803366   21922 round_trippers.go:580]     Audit-Id: f760f57d-261d-4597-90e1-e9b04bed9639
	I0224 01:01:27.803378   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:01:27.803389   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:01:27.803406   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:01:27.803846   21922 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"391","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5006 chars]
	I0224 01:01:27.804193   21922 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0224 01:01:27.804215   21922 node_conditions.go:123] node cpu capacity is 2
	I0224 01:01:27.804229   21922 node_conditions.go:105] duration metric: took 182.03459ms to run NodePressure ...
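The ephemeral-storage and CPU figures reported above come from each Node's status.capacity map in the NodeList response. A sketch of pulling them out, using a trimmed, hypothetical fragment of that body:

	// Read cpu and ephemeral-storage from status.capacity, as the
	// node_conditions.go lines above do.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		payload := []byte(`{"items":[{"status":{"capacity":{"cpu":"2","ephemeral-storage":"17784752Ki"}}}]}`)
		var list struct {
			Items []struct {
				Status struct {
					Capacity map[string]string `json:"capacity"`
				} `json:"status"`
			} `json:"items"`
		}
		if err := json.Unmarshal(payload, &list); err != nil {
			panic(err)
		}
		for _, n := range list.Items {
			fmt.Println("ephemeral:", n.Status.Capacity["ephemeral-storage"],
				"cpu:", n.Status.Capacity["cpu"])
		}
	}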
	I0224 01:01:27.804243   21922 start.go:228] waiting for startup goroutines ...
	I0224 01:01:27.804252   21922 start.go:233] waiting for cluster config update ...
	I0224 01:01:27.804269   21922 start.go:242] writing updated cluster config ...
	I0224 01:01:27.806739   21922 out.go:177] 
	I0224 01:01:27.808219   21922 config.go:182] Loaded profile config "multinode-858631": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 01:01:27.808303   21922 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/config.json ...
	I0224 01:01:27.809986   21922 out.go:177] * Starting worker node multinode-858631-m02 in cluster multinode-858631
	I0224 01:01:27.811308   21922 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 01:01:27.811327   21922 cache.go:57] Caching tarball of preloaded images
	I0224 01:01:27.811412   21922 preload.go:174] Found /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0224 01:01:27.811425   21922 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0224 01:01:27.811504   21922 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/config.json ...
	I0224 01:01:27.811651   21922 cache.go:193] Successfully downloaded all kic artifacts
	I0224 01:01:27.811677   21922 start.go:364] acquiring machines lock for multinode-858631-m02: {Name:mk99c679472abf655c2223ea7db4ce727d2ab6ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0224 01:01:27.811724   21922 start.go:368] acquired machines lock for "multinode-858631-m02" in 29.866µs
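Machine creation is serialized behind a named "machines" lock; the acquire above took ~30µs because nothing else held it. An in-process sketch of the pattern (minikube's real lock, note the Delay and Timeout fields in the log, is cross-process, which a plain mutex does not capture):

	// Serialize machine provisioning behind a lock and report the wait time.
	package main

	import (
		"fmt"
		"sync"
		"time"
	)

	func main() {
		var machines sync.Mutex
		start := time.Now()
		machines.Lock()
		fmt.Printf("acquired machines lock for %q in %v\n",
			"multinode-858631-m02", time.Since(start))
		defer machines.Unlock()
		// ... provision the machine while holding the lock ...
	}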
	I0224 01:01:27.811747   21922 start.go:93] Provisioning new machine with config: &{Name:multinode-858631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-858631 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0224 01:01:27.811816   21922 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0224 01:01:27.813625   21922 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0224 01:01:27.813705   21922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:01:27.813734   21922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:01:27.827402   21922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38389
	I0224 01:01:27.827827   21922 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:01:27.828305   21922 main.go:141] libmachine: Using API Version  1
	I0224 01:01:27.828323   21922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:01:27.828605   21922 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:01:27.828776   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetMachineName
	I0224 01:01:27.828894   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .DriverName
	I0224 01:01:27.829028   21922 start.go:159] libmachine.API.Create for "multinode-858631" (driver="kvm2")
	I0224 01:01:27.829057   21922 client.go:168] LocalClient.Create starting
	I0224 01:01:27.829090   21922 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem
	I0224 01:01:27.829123   21922 main.go:141] libmachine: Decoding PEM data...
	I0224 01:01:27.829146   21922 main.go:141] libmachine: Parsing certificate...
	I0224 01:01:27.829213   21922 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem
	I0224 01:01:27.829239   21922 main.go:141] libmachine: Decoding PEM data...
	I0224 01:01:27.829258   21922 main.go:141] libmachine: Parsing certificate...
	I0224 01:01:27.829289   21922 main.go:141] libmachine: Running pre-create checks...
	I0224 01:01:27.829301   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .PreCreateCheck
	I0224 01:01:27.829458   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetConfigRaw
	I0224 01:01:27.829801   21922 main.go:141] libmachine: Creating machine...
	I0224 01:01:27.829817   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .Create
	I0224 01:01:27.829928   21922 main.go:141] libmachine: (multinode-858631-m02) Creating KVM machine...
	I0224 01:01:27.831074   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found existing default KVM network
	I0224 01:01:27.831235   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found existing private KVM network mk-multinode-858631
	I0224 01:01:27.831320   21922 main.go:141] libmachine: (multinode-858631-m02) Setting up store path in /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02 ...
	I0224 01:01:27.831349   21922 main.go:141] libmachine: (multinode-858631-m02) Building disk image from file:///home/jenkins/minikube-integration/15909-4074/.minikube/cache/iso/amd64/minikube-v1.29.0-1676568791-15849-amd64.iso
	I0224 01:01:27.831426   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:27.831318   22156 common.go:116] Making disk image using store path: /home/jenkins/minikube-integration/15909-4074/.minikube
	I0224 01:01:27.831522   21922 main.go:141] libmachine: (multinode-858631-m02) Downloading /home/jenkins/minikube-integration/15909-4074/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/15909-4074/.minikube/cache/iso/amd64/minikube-v1.29.0-1676568791-15849-amd64.iso...
	I0224 01:01:28.023370   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:28.023232   22156 common.go:123] Creating ssh key: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02/id_rsa...
	I0224 01:01:28.179934   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:28.179815   22156 common.go:129] Creating raw disk image: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02/multinode-858631-m02.rawdisk...
	I0224 01:01:28.179982   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | Writing magic tar header
	I0224 01:01:28.179999   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | Writing SSH key tar header
	I0224 01:01:28.180013   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:28.179931   22156 common.go:143] Fixing permissions on /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02 ...
	I0224 01:01:28.180036   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02
	I0224 01:01:28.180094   21922 main.go:141] libmachine: (multinode-858631-m02) Setting executable bit set on /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02 (perms=drwx------)
	I0224 01:01:28.180109   21922 main.go:141] libmachine: (multinode-858631-m02) Setting executable bit set on /home/jenkins/minikube-integration/15909-4074/.minikube/machines (perms=drwxrwxr-x)
	I0224 01:01:28.180122   21922 main.go:141] libmachine: (multinode-858631-m02) Setting executable bit set on /home/jenkins/minikube-integration/15909-4074/.minikube (perms=drwxr-xr-x)
	I0224 01:01:28.180137   21922 main.go:141] libmachine: (multinode-858631-m02) Setting executable bit set on /home/jenkins/minikube-integration/15909-4074 (perms=drwxrwxr-x)
	I0224 01:01:28.180153   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15909-4074/.minikube/machines
	I0224 01:01:28.180167   21922 main.go:141] libmachine: (multinode-858631-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0224 01:01:28.180182   21922 main.go:141] libmachine: (multinode-858631-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0224 01:01:28.180190   21922 main.go:141] libmachine: (multinode-858631-m02) Creating domain...
	I0224 01:01:28.180203   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15909-4074/.minikube
	I0224 01:01:28.180217   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15909-4074
	I0224 01:01:28.180235   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0224 01:01:28.180249   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | Checking permissions on dir: /home/jenkins
	I0224 01:01:28.180262   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | Checking permissions on dir: /home
	I0224 01:01:28.180279   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | Skipping /home - not owner
	I0224 01:01:28.182150   21922 main.go:141] libmachine: (multinode-858631-m02) define libvirt domain using xml: 
	I0224 01:01:28.182175   21922 main.go:141] libmachine: (multinode-858631-m02) <domain type='kvm'>
	I0224 01:01:28.182217   21922 main.go:141] libmachine: (multinode-858631-m02)   <name>multinode-858631-m02</name>
	I0224 01:01:28.182240   21922 main.go:141] libmachine: (multinode-858631-m02)   <memory unit='MiB'>2200</memory>
	I0224 01:01:28.182253   21922 main.go:141] libmachine: (multinode-858631-m02)   <vcpu>2</vcpu>
	I0224 01:01:28.182266   21922 main.go:141] libmachine: (multinode-858631-m02)   <features>
	I0224 01:01:28.182279   21922 main.go:141] libmachine: (multinode-858631-m02)     <acpi/>
	I0224 01:01:28.182290   21922 main.go:141] libmachine: (multinode-858631-m02)     <apic/>
	I0224 01:01:28.182306   21922 main.go:141] libmachine: (multinode-858631-m02)     <pae/>
	I0224 01:01:28.182319   21922 main.go:141] libmachine: (multinode-858631-m02)     
	I0224 01:01:28.182345   21922 main.go:141] libmachine: (multinode-858631-m02)   </features>
	I0224 01:01:28.182359   21922 main.go:141] libmachine: (multinode-858631-m02)   <cpu mode='host-passthrough'>
	I0224 01:01:28.182368   21922 main.go:141] libmachine: (multinode-858631-m02)   
	I0224 01:01:28.182380   21922 main.go:141] libmachine: (multinode-858631-m02)   </cpu>
	I0224 01:01:28.182393   21922 main.go:141] libmachine: (multinode-858631-m02)   <os>
	I0224 01:01:28.182406   21922 main.go:141] libmachine: (multinode-858631-m02)     <type>hvm</type>
	I0224 01:01:28.182420   21922 main.go:141] libmachine: (multinode-858631-m02)     <boot dev='cdrom'/>
	I0224 01:01:28.182431   21922 main.go:141] libmachine: (multinode-858631-m02)     <boot dev='hd'/>
	I0224 01:01:28.182445   21922 main.go:141] libmachine: (multinode-858631-m02)     <bootmenu enable='no'/>
	I0224 01:01:28.182457   21922 main.go:141] libmachine: (multinode-858631-m02)   </os>
	I0224 01:01:28.182470   21922 main.go:141] libmachine: (multinode-858631-m02)   <devices>
	I0224 01:01:28.182483   21922 main.go:141] libmachine: (multinode-858631-m02)     <disk type='file' device='cdrom'>
	I0224 01:01:28.182501   21922 main.go:141] libmachine: (multinode-858631-m02)       <source file='/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02/boot2docker.iso'/>
	I0224 01:01:28.182514   21922 main.go:141] libmachine: (multinode-858631-m02)       <target dev='hdc' bus='scsi'/>
	I0224 01:01:28.182528   21922 main.go:141] libmachine: (multinode-858631-m02)       <readonly/>
	I0224 01:01:28.182542   21922 main.go:141] libmachine: (multinode-858631-m02)     </disk>
	I0224 01:01:28.182558   21922 main.go:141] libmachine: (multinode-858631-m02)     <disk type='file' device='disk'>
	I0224 01:01:28.182572   21922 main.go:141] libmachine: (multinode-858631-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0224 01:01:28.182590   21922 main.go:141] libmachine: (multinode-858631-m02)       <source file='/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02/multinode-858631-m02.rawdisk'/>
	I0224 01:01:28.182603   21922 main.go:141] libmachine: (multinode-858631-m02)       <target dev='hda' bus='virtio'/>
	I0224 01:01:28.182617   21922 main.go:141] libmachine: (multinode-858631-m02)     </disk>
	I0224 01:01:28.182630   21922 main.go:141] libmachine: (multinode-858631-m02)     <interface type='network'>
	I0224 01:01:28.182644   21922 main.go:141] libmachine: (multinode-858631-m02)       <source network='mk-multinode-858631'/>
	I0224 01:01:28.182657   21922 main.go:141] libmachine: (multinode-858631-m02)       <model type='virtio'/>
	I0224 01:01:28.182671   21922 main.go:141] libmachine: (multinode-858631-m02)     </interface>
	I0224 01:01:28.182683   21922 main.go:141] libmachine: (multinode-858631-m02)     <interface type='network'>
	I0224 01:01:28.182697   21922 main.go:141] libmachine: (multinode-858631-m02)       <source network='default'/>
	I0224 01:01:28.182709   21922 main.go:141] libmachine: (multinode-858631-m02)       <model type='virtio'/>
	I0224 01:01:28.182722   21922 main.go:141] libmachine: (multinode-858631-m02)     </interface>
	I0224 01:01:28.182735   21922 main.go:141] libmachine: (multinode-858631-m02)     <serial type='pty'>
	I0224 01:01:28.182748   21922 main.go:141] libmachine: (multinode-858631-m02)       <target port='0'/>
	I0224 01:01:28.182762   21922 main.go:141] libmachine: (multinode-858631-m02)     </serial>
	I0224 01:01:28.182776   21922 main.go:141] libmachine: (multinode-858631-m02)     <console type='pty'>
	I0224 01:01:28.182789   21922 main.go:141] libmachine: (multinode-858631-m02)       <target type='serial' port='0'/>
	I0224 01:01:28.182804   21922 main.go:141] libmachine: (multinode-858631-m02)     </console>
	I0224 01:01:28.182829   21922 main.go:141] libmachine: (multinode-858631-m02)     <rng model='virtio'>
	I0224 01:01:28.182845   21922 main.go:141] libmachine: (multinode-858631-m02)       <backend model='random'>/dev/random</backend>
	I0224 01:01:28.182857   21922 main.go:141] libmachine: (multinode-858631-m02)     </rng>
	I0224 01:01:28.182870   21922 main.go:141] libmachine: (multinode-858631-m02)     
	I0224 01:01:28.182881   21922 main.go:141] libmachine: (multinode-858631-m02)     
	I0224 01:01:28.182894   21922 main.go:141] libmachine: (multinode-858631-m02)   </devices>
	I0224 01:01:28.182906   21922 main.go:141] libmachine: (multinode-858631-m02) </domain>
	I0224 01:01:28.182928   21922 main.go:141] libmachine: (multinode-858631-m02) 
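The XML printed line by line above is the complete libvirt domain definition for the new VM. A sketch of turning such XML into a running domain with the libvirt Go bindings (libvirt.org/go/libvirt); the XML body is elided here and the error handling simplified, so this is an outline of the technique, not the kvm2 driver's actual code:

	// Define a libvirt domain from XML and start it.
	package main

	import (
		"fmt"

		"libvirt.org/go/libvirt"
	)

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system") // URI matches KVMQemuURI in the config above
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		const domainXML = `<domain type='kvm'>...</domain>` // the XML logged above, elided here

		dom, err := conn.DomainDefineXML(domainXML) // "define libvirt domain using xml"
		if err != nil {
			panic(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // "Creating domain..." starts the VM
			panic(err)
		}
		fmt.Println("domain started")
	}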
	I0224 01:01:28.189990   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:dc:b5:5f in network default
	I0224 01:01:28.190580   21922 main.go:141] libmachine: (multinode-858631-m02) Ensuring networks are active...
	I0224 01:01:28.190608   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:28.191215   21922 main.go:141] libmachine: (multinode-858631-m02) Ensuring network default is active
	I0224 01:01:28.191474   21922 main.go:141] libmachine: (multinode-858631-m02) Ensuring network mk-multinode-858631 is active
	I0224 01:01:28.191828   21922 main.go:141] libmachine: (multinode-858631-m02) Getting domain xml...
	I0224 01:01:28.192525   21922 main.go:141] libmachine: (multinode-858631-m02) Creating domain...
	I0224 01:01:29.402197   21922 main.go:141] libmachine: (multinode-858631-m02) Waiting to get IP...
	I0224 01:01:29.402891   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:29.403279   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find current IP address of domain multinode-858631-m02 in network mk-multinode-858631
	I0224 01:01:29.403334   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:29.403277   22156 retry.go:31] will retry after 277.751048ms: waiting for machine to come up
	I0224 01:01:29.682862   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:29.683259   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find current IP address of domain multinode-858631-m02 in network mk-multinode-858631
	I0224 01:01:29.683291   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:29.683204   22156 retry.go:31] will retry after 237.567254ms: waiting for machine to come up
	I0224 01:01:29.922625   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:29.923058   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find current IP address of domain multinode-858631-m02 in network mk-multinode-858631
	I0224 01:01:29.923089   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:29.922990   22156 retry.go:31] will retry after 445.26408ms: waiting for machine to come up
	I0224 01:01:30.369421   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:30.369931   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find current IP address of domain multinode-858631-m02 in network mk-multinode-858631
	I0224 01:01:30.369961   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:30.369863   22156 retry.go:31] will retry after 368.046626ms: waiting for machine to come up
	I0224 01:01:30.739335   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:30.739704   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find current IP address of domain multinode-858631-m02 in network mk-multinode-858631
	I0224 01:01:30.739744   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:30.739661   22156 retry.go:31] will retry after 678.03543ms: waiting for machine to come up
	I0224 01:01:31.419348   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:31.419761   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find current IP address of domain multinode-858631-m02 in network mk-multinode-858631
	I0224 01:01:31.419790   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:31.419702   22156 retry.go:31] will retry after 740.078986ms: waiting for machine to come up
	I0224 01:01:32.161606   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:32.162114   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find current IP address of domain multinode-858631-m02 in network mk-multinode-858631
	I0224 01:01:32.162144   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:32.162058   22156 retry.go:31] will retry after 1.178887374s: waiting for machine to come up
	I0224 01:01:33.342862   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:33.343293   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find current IP address of domain multinode-858631-m02 in network mk-multinode-858631
	I0224 01:01:33.343315   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:33.343246   22156 retry.go:31] will retry after 1.221732807s: waiting for machine to come up
	I0224 01:01:34.566725   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:34.567154   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find current IP address of domain multinode-858631-m02 in network mk-multinode-858631
	I0224 01:01:34.567178   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:34.567105   22156 retry.go:31] will retry after 1.636230736s: waiting for machine to come up
	I0224 01:01:36.206068   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:36.206429   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find current IP address of domain multinode-858631-m02 in network mk-multinode-858631
	I0224 01:01:36.206457   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:36.206408   22156 retry.go:31] will retry after 2.225895186s: waiting for machine to come up
	I0224 01:01:38.433607   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:38.434136   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find current IP address of domain multinode-858631-m02 in network mk-multinode-858631
	I0224 01:01:38.434175   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:38.434071   22156 retry.go:31] will retry after 1.749493158s: waiting for machine to come up
	I0224 01:01:40.185273   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:40.185793   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find current IP address of domain multinode-858631-m02 in network mk-multinode-858631
	I0224 01:01:40.185835   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:40.185737   22156 retry.go:31] will retry after 3.620543501s: waiting for machine to come up
	I0224 01:01:43.807940   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:43.808314   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find current IP address of domain multinode-858631-m02 in network mk-multinode-858631
	I0224 01:01:43.808341   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:43.808276   22156 retry.go:31] will retry after 2.729278179s: waiting for machine to come up
	I0224 01:01:46.541068   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:46.541435   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find current IP address of domain multinode-858631-m02 in network mk-multinode-858631
	I0224 01:01:46.541457   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | I0224 01:01:46.541384   22156 retry.go:31] will retry after 3.976325501s: waiting for machine to come up
	I0224 01:01:50.519773   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:50.520108   21922 main.go:141] libmachine: (multinode-858631-m02) Found IP for machine: 192.168.39.3
	I0224 01:01:50.520123   21922 main.go:141] libmachine: (multinode-858631-m02) Reserving static IP address...
	I0224 01:01:50.520133   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has current primary IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:50.520515   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | unable to find host DHCP lease matching {name: "multinode-858631-m02", mac: "52:54:00:14:f2:a2", ip: "192.168.39.3"} in network mk-multinode-858631
	I0224 01:01:50.589344   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | Getting to WaitForSSH function...
	I0224 01:01:50.589371   21922 main.go:141] libmachine: (multinode-858631-m02) Reserved static IP address: 192.168.39.3
	I0224 01:01:50.589385   21922 main.go:141] libmachine: (multinode-858631-m02) Waiting for SSH to be available...
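The retry.go lines above poll the domain for a DHCP lease with a growing, jittered delay until an IP appears. A sketch of that wait-with-backoff pattern; lookupIP is a hypothetical stand-in for reading the lease, and the timings are shortened for illustration:

	// Poll with jittered, growing backoff until a lookup succeeds or a deadline passes.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func lookupIP() (string, error) { return "", errors.New("unable to find current IP address") }

	func main() {
		backoff := 250 * time.Millisecond
		deadline := time.Now().Add(5 * time.Second) // the real timeout is minutes, shortened here
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				fmt.Println("Found IP for machine:", ip)
				return
			}
			wait := backoff + time.Duration(rand.Int63n(int64(backoff))) // jitter, like the uneven delays logged
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			backoff = backoff * 3 / 2 // grow the base delay
		}
		fmt.Println("timed out waiting for IP")
	}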
	I0224 01:01:50.592231   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:50.592702   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:minikube Clientid:01:52:54:00:14:f2:a2}
	I0224 01:01:50.592738   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:50.592848   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | Using SSH client type: external
	I0224 01:01:50.592877   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02/id_rsa (-rw-------)
	I0224 01:01:50.592912   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0224 01:01:50.592928   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | About to run SSH command:
	I0224 01:01:50.592945   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | exit 0
	I0224 01:01:50.676906   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | SSH cmd err, output: <nil>: 
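WaitForSSH shells out to /usr/bin/ssh with the options shown and runs `exit 0` until the command succeeds, which signals that sshd is up and the key is accepted. A sketch of the same probe using golang.org/x/crypto/ssh instead of an external client; the key path is elided and the retry cadence is an assumption:

	// Dial SSH and run `exit 0` until it succeeds.
	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func sshReady(addr, keyPath string) bool {
		pem, err := os.ReadFile(keyPath)
		if err != nil {
			return false
		}
		signer, err := ssh.ParsePrivateKey(pem)
		if err != nil {
			return false
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return false
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return false
		}
		defer sess.Close()
		return sess.Run("exit 0") == nil
	}

	func main() {
		for !sshReady("192.168.39.3:22", "/path/to/id_rsa") { // key path elided; the log shows the real one
			time.Sleep(time.Second)
		}
		fmt.Println("SSH is available")
	}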
	I0224 01:01:50.677171   21922 main.go:141] libmachine: (multinode-858631-m02) KVM machine creation complete!
	I0224 01:01:50.677418   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetConfigRaw
	I0224 01:01:50.677919   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .DriverName
	I0224 01:01:50.678124   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .DriverName
	I0224 01:01:50.678275   21922 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0224 01:01:50.678292   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetState
	I0224 01:01:50.679599   21922 main.go:141] libmachine: Detecting operating system of created instance...
	I0224 01:01:50.679611   21922 main.go:141] libmachine: Waiting for SSH to be available...
	I0224 01:01:50.679617   21922 main.go:141] libmachine: Getting to WaitForSSH function...
	I0224 01:01:50.679624   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHHostname
	I0224 01:01:50.681959   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:50.682294   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
	I0224 01:01:50.682319   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:50.682503   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHPort
	I0224 01:01:50.682681   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
	I0224 01:01:50.682831   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
	I0224 01:01:50.682964   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHUsername
	I0224 01:01:50.683121   21922 main.go:141] libmachine: Using SSH client type: native
	I0224 01:01:50.683540   21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0224 01:01:50.683553   21922 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0224 01:01:50.788270   21922 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 01:01:50.788292   21922 main.go:141] libmachine: Detecting the provisioner...
	I0224 01:01:50.788303   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHHostname
	I0224 01:01:50.790952   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:50.791303   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
	I0224 01:01:50.791339   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:50.791472   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHPort
	I0224 01:01:50.791655   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
	I0224 01:01:50.791792   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
	I0224 01:01:50.791932   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHUsername
	I0224 01:01:50.792084   21922 main.go:141] libmachine: Using SSH client type: native
	I0224 01:01:50.792473   21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0224 01:01:50.792485   21922 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0224 01:01:50.901589   21922 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g41e8300-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0224 01:01:50.901644   21922 main.go:141] libmachine: found compatible host: buildroot
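Provisioner detection boils down to running `cat /etc/os-release` and matching the ID field; "buildroot" selects the Buildroot provisioner. A simplified parsing sketch (minikube's real detection lives in its libmachine fork):

	// Pull the ID field out of /etc/os-release output.
	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	func osReleaseID(body string) string {
		sc := bufio.NewScanner(strings.NewReader(body))
		for sc.Scan() {
			if v, ok := strings.CutPrefix(sc.Text(), "ID="); ok {
				return strings.Trim(v, `"`)
			}
		}
		return ""
	}

	func main() {
		body := "NAME=Buildroot\nVERSION=2021.02.12-1-g41e8300-dirty\nID=buildroot\n" // as returned above
		if osReleaseID(body) == "buildroot" {
			fmt.Println("found compatible host: buildroot")
		}
	}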
	I0224 01:01:50.901658   21922 main.go:141] libmachine: Provisioning with buildroot...
	I0224 01:01:50.901672   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetMachineName
	I0224 01:01:50.901935   21922 buildroot.go:166] provisioning hostname "multinode-858631-m02"
	I0224 01:01:50.901957   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetMachineName
	I0224 01:01:50.902138   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHHostname
	I0224 01:01:50.904729   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:50.905065   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
	I0224 01:01:50.905094   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:50.905236   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHPort
	I0224 01:01:50.905408   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
	I0224 01:01:50.905589   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
	I0224 01:01:50.905757   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHUsername
	I0224 01:01:50.905943   21922 main.go:141] libmachine: Using SSH client type: native
	I0224 01:01:50.906346   21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0224 01:01:50.906364   21922 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-858631-m02 && echo "multinode-858631-m02" | sudo tee /etc/hostname
	I0224 01:01:51.024628   21922 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-858631-m02
	
	I0224 01:01:51.024665   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHHostname
	I0224 01:01:51.027088   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:51.027556   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
	I0224 01:01:51.027583   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:51.027756   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHPort
	I0224 01:01:51.027917   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
	I0224 01:01:51.028078   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
	I0224 01:01:51.028175   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHUsername
	I0224 01:01:51.028319   21922 main.go:141] libmachine: Using SSH client type: native
	I0224 01:01:51.028778   21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0224 01:01:51.028799   21922 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-858631-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-858631-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-858631-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 01:01:51.145030   21922 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 01:01:51.145057   21922 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15909-4074/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-4074/.minikube}
	I0224 01:01:51.145069   21922 buildroot.go:174] setting up certificates
	I0224 01:01:51.145076   21922 provision.go:83] configureAuth start
	I0224 01:01:51.145084   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetMachineName
	I0224 01:01:51.145339   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetIP
	I0224 01:01:51.148109   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:51.148443   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
	I0224 01:01:51.148471   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:51.148591   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHHostname
	I0224 01:01:51.150795   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:51.151048   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
	I0224 01:01:51.151089   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:51.151222   21922 provision.go:138] copyHostCerts
	I0224 01:01:51.151252   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem
	I0224 01:01:51.151287   21922 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem, removing ...
	I0224 01:01:51.151295   21922 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem
	I0224 01:01:51.151365   21922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem (1123 bytes)
	I0224 01:01:51.151431   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem
	I0224 01:01:51.151448   21922 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem, removing ...
	I0224 01:01:51.151454   21922 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem
	I0224 01:01:51.151475   21922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem (1679 bytes)
	I0224 01:01:51.151514   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem
	I0224 01:01:51.151529   21922 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem, removing ...
	I0224 01:01:51.151535   21922 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem
	I0224 01:01:51.151553   21922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem (1078 bytes)
	I0224 01:01:51.151595   21922 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca-key.pem org=jenkins.multinode-858631-m02 san=[192.168.39.3 192.168.39.3 localhost 127.0.0.1 minikube multinode-858631-m02]
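
	The server certificate generated here carries the SANs listed in the log line (the node IP, localhost, minikube, and the machine name) so the Docker daemon's TLS endpoint verifies under any of those identities. A self-contained sketch of issuing such a certificate with crypto/x509; it self-signs for brevity, whereas the real flow signs with the profile's ca.pem/ca-key.pem:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-858631-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the san=[...] list in the log line above.
			IPAddresses: []net.IP{net.ParseIP("192.168.39.3"), net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "multinode-858631-m02"},
		}
		// Self-signed: template doubles as parent. The real code signs
		// with the CA key instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
			panic(err)
		}
	}
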
	I0224 01:01:51.235724   21922 provision.go:172] copyRemoteCerts
	I0224 01:01:51.235773   21922 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 01:01:51.235792   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHHostname
	I0224 01:01:51.238284   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:51.238619   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
	I0224 01:01:51.238654   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:51.238793   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHPort
	I0224 01:01:51.238963   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
	I0224 01:01:51.239115   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHUsername
	I0224 01:01:51.239218   21922 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02/id_rsa Username:docker}
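
	Each `new ssh client` line corresponds to dialing the node as user `docker` with the profile's id_rsa. A self-contained equivalent using golang.org/x/crypto/ssh (not minikube's internal sshutil; host-key checking is disabled here only because the target is a throwaway test VM):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path, address, and user taken from the log line above.
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for an ephemeral test VM
		}
		client, err := ssh.Dial("tcp", "192.168.39.3:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()

		out, err := sess.CombinedOutput("cat /etc/os-release")
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}
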
	I0224 01:01:51.326431   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0224 01:01:51.326502   21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0224 01:01:51.347801   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0224 01:01:51.347857   21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0224 01:01:51.369122   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0224 01:01:51.369173   21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0224 01:01:51.391247   21922 provision.go:86] duration metric: configureAuth took 246.161676ms
	I0224 01:01:51.391272   21922 buildroot.go:189] setting minikube options for container-runtime
	I0224 01:01:51.391462   21922 config.go:182] Loaded profile config "multinode-858631": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 01:01:51.391495   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .DriverName
	I0224 01:01:51.391757   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHHostname
	I0224 01:01:51.394377   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:51.394683   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
	I0224 01:01:51.394708   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:51.394856   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHPort
	I0224 01:01:51.395024   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
	I0224 01:01:51.395162   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
	I0224 01:01:51.395281   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHUsername
	I0224 01:01:51.395495   21922 main.go:141] libmachine: Using SSH client type: native
	I0224 01:01:51.395904   21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0224 01:01:51.395918   21922 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0224 01:01:51.502533   21922 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0224 01:01:51.502555   21922 buildroot.go:70] root file system type: tmpfs
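
	The check above classifies the guest's root filesystem (tmpfs on the Buildroot ISO). The same probe can be made without shelling out to `df`; a Linux-only sketch using golang.org/x/sys/unix:

	package main

	import (
		"fmt"

		"golang.org/x/sys/unix"
	)

	func main() {
		// Equivalent of `df --output=fstype / | tail -n 1`:
		// statfs(2) reports the filesystem magic for the root mount.
		var fs unix.Statfs_t
		if err := unix.Statfs("/", &fs); err != nil {
			panic(err)
		}
		if fs.Type == unix.TMPFS_MAGIC {
			fmt.Println("root filesystem is tmpfs")
		} else {
			fmt.Printf("root filesystem magic: %#x\n", fs.Type)
		}
	}
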
	I0224 01:01:51.502674   21922 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0224 01:01:51.502695   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHHostname
	I0224 01:01:51.505118   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:51.505450   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
	I0224 01:01:51.505490   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:51.505633   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHPort
	I0224 01:01:51.505792   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
	I0224 01:01:51.505923   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
	I0224 01:01:51.506001   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHUsername
	I0224 01:01:51.506103   21922 main.go:141] libmachine: Using SSH client type: native
	I0224 01:01:51.506474   21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0224 01:01:51.506533   21922 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.217"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0224 01:01:51.626613   21922 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.217
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0224 01:01:51.626641   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHHostname
	I0224 01:01:51.629332   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:51.629732   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
	I0224 01:01:51.629762   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:51.630006   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHPort
	I0224 01:01:51.630174   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
	I0224 01:01:51.630350   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
	I0224 01:01:51.630504   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHUsername
	I0224 01:01:51.630657   21922 main.go:141] libmachine: Using SSH client type: native
	I0224 01:01:51.631035   21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0224 01:01:51.631051   21922 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0224 01:01:52.326910   21922 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0224 01:01:52.326936   21922 main.go:141] libmachine: Checking connection to Docker...
	I0224 01:01:52.326947   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetURL
	I0224 01:01:52.327935   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | Using libvirt version 6000000
	I0224 01:01:52.330148   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:52.330536   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
	I0224 01:01:52.330561   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:52.330765   21922 main.go:141] libmachine: Docker is up and running!
	I0224 01:01:52.330779   21922 main.go:141] libmachine: Reticulating splines...
	I0224 01:01:52.330787   21922 client.go:171] LocalClient.Create took 24.50171783s
	I0224 01:01:52.330804   21922 start.go:167] duration metric: libmachine.API.Create for "multinode-858631" took 24.501778325s
	I0224 01:01:52.330812   21922 start.go:300] post-start starting for "multinode-858631-m02" (driver="kvm2")
	I0224 01:01:52.330817   21922 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 01:01:52.330833   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .DriverName
	I0224 01:01:52.331079   21922 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 01:01:52.331111   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHHostname
	I0224 01:01:52.333582   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:52.333978   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
	I0224 01:01:52.334004   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:52.334162   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHPort
	I0224 01:01:52.334335   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
	I0224 01:01:52.334476   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHUsername
	I0224 01:01:52.334605   21922 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02/id_rsa Username:docker}
	I0224 01:01:52.417937   21922 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 01:01:52.422286   21922 command_runner.go:130] > NAME=Buildroot
	I0224 01:01:52.422300   21922 command_runner.go:130] > VERSION=2021.02.12-1-g41e8300-dirty
	I0224 01:01:52.422305   21922 command_runner.go:130] > ID=buildroot
	I0224 01:01:52.422313   21922 command_runner.go:130] > VERSION_ID=2021.02.12
	I0224 01:01:52.422324   21922 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0224 01:01:52.422355   21922 info.go:137] Remote host: Buildroot 2021.02.12
	I0224 01:01:52.422373   21922 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-4074/.minikube/addons for local assets ...
	I0224 01:01:52.422434   21922 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-4074/.minikube/files for local assets ...
	I0224 01:01:52.422508   21922 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem -> 111312.pem in /etc/ssl/certs
	I0224 01:01:52.422517   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem -> /etc/ssl/certs/111312.pem
	I0224 01:01:52.422595   21922 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 01:01:52.430179   21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem --> /etc/ssl/certs/111312.pem (1708 bytes)
	I0224 01:01:52.453345   21922 start.go:303] post-start completed in 122.521368ms
	I0224 01:01:52.453391   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetConfigRaw
	I0224 01:01:52.453911   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetIP
	I0224 01:01:52.456385   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:52.456761   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
	I0224 01:01:52.456784   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:52.457027   21922 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/config.json ...
	I0224 01:01:52.457185   21922 start.go:128] duration metric: createHost completed in 24.645361196s
	I0224 01:01:52.457206   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHHostname
	I0224 01:01:52.459250   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:52.459680   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
	I0224 01:01:52.459717   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:52.459867   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHPort
	I0224 01:01:52.460059   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
	I0224 01:01:52.460219   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
	I0224 01:01:52.460362   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHUsername
	I0224 01:01:52.460539   21922 main.go:141] libmachine: Using SSH client type: native
	I0224 01:01:52.460978   21922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0224 01:01:52.460991   21922 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0224 01:01:52.569554   21922 main.go:141] libmachine: SSH cmd err, output: <nil>: 1677200512.542153926
	
	I0224 01:01:52.569573   21922 fix.go:207] guest clock: 1677200512.542153926
	I0224 01:01:52.569582   21922 fix.go:220] Guest: 2023-02-24 01:01:52.542153926 +0000 UTC Remote: 2023-02-24 01:01:52.457195612 +0000 UTC m=+104.572643378 (delta=84.958314ms)
	I0224 01:01:52.569598   21922 fix.go:191] guest clock delta is within tolerance: 84.958314ms
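
	The tolerance check compares the guest's `date +%s.%N` output against the host clock and only resyncs when the delta exceeds a threshold; here 84.958314ms is within tolerance, so nothing is adjusted. A minimal sketch of that comparison (illustrative; the threshold and the resync step are omitted):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts "date +%s.%N" output such as
	// "1677200512.542153926" into a time.Time.
	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1677200512.542153926")
		if err != nil {
			panic(err)
		}
		// Positive delta means the guest clock is behind the host.
		fmt.Printf("guest clock delta: %v\n", time.Since(guest))
	}
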
	I0224 01:01:52.569604   21922 start.go:83] releasing machines lock for "multinode-858631-m02", held for 24.757869416s
	I0224 01:01:52.569628   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .DriverName
	I0224 01:01:52.569863   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetIP
	I0224 01:01:52.572193   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:52.572559   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
	I0224 01:01:52.572588   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:52.574873   21922 out.go:177] * Found network options:
	I0224 01:01:52.576032   21922 out.go:177]   - NO_PROXY=192.168.39.217
	W0224 01:01:52.577039   21922 proxy.go:119] fail to check proxy env: Error ip not in block
	I0224 01:01:52.577081   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .DriverName
	I0224 01:01:52.577553   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .DriverName
	I0224 01:01:52.577725   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .DriverName
	I0224 01:01:52.577793   21922 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0224 01:01:52.577823   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHHostname
	W0224 01:01:52.577872   21922 proxy.go:119] fail to check proxy env: Error ip not in block
	I0224 01:01:52.577916   21922 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0224 01:01:52.577928   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHHostname
	I0224 01:01:52.580299   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:52.580645   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
	I0224 01:01:52.580674   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:52.580692   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:52.580817   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHPort
	I0224 01:01:52.580975   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
	I0224 01:01:52.581117   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
	I0224 01:01:52.581141   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:01:52.581141   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHUsername
	I0224 01:01:52.581309   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHPort
	I0224 01:01:52.581302   21922 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02/id_rsa Username:docker}
	I0224 01:01:52.581466   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
	I0224 01:01:52.581619   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHUsername
	I0224 01:01:52.581752   21922 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02/id_rsa Username:docker}
	I0224 01:01:52.659795   21922 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0224 01:01:52.660067   21922 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0224 01:01:52.660131   21922 ssh_runner.go:195] Run: which cri-dockerd
	I0224 01:01:52.686380   21922 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0224 01:01:52.686438   21922 command_runner.go:130] > /usr/bin/cri-dockerd
	I0224 01:01:52.686563   21922 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0224 01:01:52.694913   21922 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0224 01:01:52.710495   21922 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0224 01:01:52.723985   21922 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0224 01:01:52.724048   21922 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0224 01:01:52.724063   21922 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 01:01:52.724141   21922 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 01:01:52.749723   21922 docker.go:630] Got preloaded images: 
	I0224 01:01:52.749743   21922 docker.go:636] registry.k8s.io/kube-apiserver:v1.26.1 wasn't preloaded
	I0224 01:01:52.749788   21922 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0224 01:01:52.757997   21922 command_runner.go:139] > {"Repositories":{}}
	I0224 01:01:52.758079   21922 ssh_runner.go:195] Run: which lz4
	I0224 01:01:52.761357   21922 command_runner.go:130] > /usr/bin/lz4
	I0224 01:01:52.761385   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0224 01:01:52.761454   21922 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0224 01:01:52.765118   21922 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0224 01:01:52.765250   21922 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0224 01:01:52.765272   21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (416334111 bytes)
	I0224 01:01:54.420974   21922 docker.go:594] Took 1.659537 seconds to copy over tarball
	I0224 01:01:54.421035   21922 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0224 01:01:57.163251   21922 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.742191243s)
	I0224 01:01:57.163279   21922 ssh_runner.go:146] rm: /preloaded.tar.lz4
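
	The preload step copies a ~416 MB lz4-compressed image tarball into the guest and unpacks it over /var so the container runtime starts with all Kubernetes images already present. A rough Go equivalent of `tar -I lz4 -xf`, listing entries rather than extracting to keep the sketch safe (assumes the third-party github.com/pierrec/lz4/v4 package):

	package main

	import (
		"archive/tar"
		"fmt"
		"io"
		"os"

		"github.com/pierrec/lz4/v4" // assumed third-party lz4 decompressor
	)

	func main() {
		f, err := os.Open("/preloaded.tar.lz4")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		// Chain the lz4 stream decoder into the tar reader, mirroring
		// what `tar -I lz4` does with an external filter process.
		tr := tar.NewReader(lz4.NewReader(f))
		for {
			hdr, err := tr.Next()
			if err == io.EOF {
				break
			}
			if err != nil {
				panic(err)
			}
			fmt.Println(hdr.Name)
		}
	}
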
	I0224 01:01:57.199728   21922 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0224 01:01:57.208743   21922 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.9.3":"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a":"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.6-0":"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c":"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.26.1":"sha256:deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","registry.k8s.io/kube-apiserver@sha256:99e1ed9fbc8a8d36a70f148f25130c02e0e366875249906be0bcb2c2d9df0c26":"sha256:deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.26.1":"sha256:e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","registry.k8s.io/kube-controller-manager@sha256:40adecbe3a40aa147c7d6e9a1f5fbd99b3f6d42d5222483ed3a47337d4f9a10b":"sha256:e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.26.1":"sha256:46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd","registry.k8s.io/kube-proxy@sha256:85f705e7d98158a67432c53885b0d470c673b0fad3693440b45d07efebcda1c3":"sha256:46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.26.1":"sha256:655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","registry.k8s.io/kube-scheduler@sha256:af0292c2c4fa6d09ee8544445eef373c1c280113cb6c968398a37da3744c41e4":"sha256:655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0224 01:01:57.208899   21922 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2628 bytes)
	I0224 01:01:57.224814   21922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 01:01:57.327699   21922 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0224 01:01:59.963289   21922 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.635549978s)
	I0224 01:01:59.963339   21922 start.go:485] detecting cgroup driver to use...
	I0224 01:01:59.963437   21922 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 01:01:59.986195   21922 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0224 01:01:59.986222   21922 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0224 01:01:59.986296   21922 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0224 01:01:59.998783   21922 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0224 01:02:00.007596   21922 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0224 01:02:00.007657   21922 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0224 01:02:00.016595   21922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 01:02:00.025320   21922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0224 01:02:00.034095   21922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 01:02:00.042911   21922 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 01:02:00.051732   21922 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0224 01:02:00.060301   21922 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 01:02:00.067936   21922 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0224 01:02:00.067984   21922 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 01:02:00.075579   21922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 01:02:00.171082   21922 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0224 01:02:00.188454   21922 start.go:485] detecting cgroup driver to use...
	I0224 01:02:00.188522   21922 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0224 01:02:00.210571   21922 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0224 01:02:00.210594   21922 command_runner.go:130] > [Unit]
	I0224 01:02:00.210603   21922 command_runner.go:130] > Description=Docker Application Container Engine
	I0224 01:02:00.210611   21922 command_runner.go:130] > Documentation=https://docs.docker.com
	I0224 01:02:00.210620   21922 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0224 01:02:00.210627   21922 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0224 01:02:00.210638   21922 command_runner.go:130] > StartLimitBurst=3
	I0224 01:02:00.210649   21922 command_runner.go:130] > StartLimitIntervalSec=60
	I0224 01:02:00.210657   21922 command_runner.go:130] > [Service]
	I0224 01:02:00.210667   21922 command_runner.go:130] > Type=notify
	I0224 01:02:00.210673   21922 command_runner.go:130] > Restart=on-failure
	I0224 01:02:00.210684   21922 command_runner.go:130] > Environment=NO_PROXY=192.168.39.217
	I0224 01:02:00.210695   21922 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0224 01:02:00.210707   21922 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0224 01:02:00.210715   21922 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0224 01:02:00.210724   21922 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0224 01:02:00.210730   21922 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0224 01:02:00.210739   21922 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0224 01:02:00.210746   21922 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0224 01:02:00.210758   21922 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0224 01:02:00.210766   21922 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0224 01:02:00.210772   21922 command_runner.go:130] > ExecStart=
	I0224 01:02:00.210785   21922 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0224 01:02:00.210792   21922 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0224 01:02:00.210798   21922 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0224 01:02:00.210807   21922 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0224 01:02:00.210814   21922 command_runner.go:130] > LimitNOFILE=infinity
	I0224 01:02:00.210822   21922 command_runner.go:130] > LimitNPROC=infinity
	I0224 01:02:00.210826   21922 command_runner.go:130] > LimitCORE=infinity
	I0224 01:02:00.210831   21922 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0224 01:02:00.210839   21922 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0224 01:02:00.210843   21922 command_runner.go:130] > TasksMax=infinity
	I0224 01:02:00.210849   21922 command_runner.go:130] > TimeoutStartSec=0
	I0224 01:02:00.210858   21922 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0224 01:02:00.210861   21922 command_runner.go:130] > Delegate=yes
	I0224 01:02:00.210869   21922 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0224 01:02:00.210875   21922 command_runner.go:130] > KillMode=process
	I0224 01:02:00.210884   21922 command_runner.go:130] > [Install]
	I0224 01:02:00.210889   21922 command_runner.go:130] > WantedBy=multi-user.target
	I0224 01:02:00.210945   21922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0224 01:02:00.223572   21922 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0224 01:02:00.241615   21922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0224 01:02:00.253308   21922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0224 01:02:00.264535   21922 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0224 01:02:00.293560   21922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0224 01:02:00.305828   21922 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 01:02:00.323783   21922 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0224 01:02:00.323805   21922 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0224 01:02:00.324209   21922 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0224 01:02:00.426533   21922 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0224 01:02:00.528862   21922 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0224 01:02:00.528899   21922 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
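
	The 144-byte daemon.json written here pins Docker to the cgroupfs driver so it matches the kubelet. Its exact contents are not printed in the log, so the following is a hypothetical reconstruction of the kind of payload involved, not the literal file:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// Hypothetical shape of a daemon.json selecting the cgroupfs
		// driver; the real 144-byte payload is not shown in the log.
		cfg := map[string]any{
			"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
		}
		out, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}
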
	I0224 01:02:00.545354   21922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 01:02:00.646170   21922 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0224 01:02:01.991521   21922 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.345312876s)
	I0224 01:02:01.991674   21922 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 01:02:02.095408   21922 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0224 01:02:02.202313   21922 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 01:02:02.308387   21922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 01:02:02.413315   21922 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0224 01:02:02.428724   21922 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0224 01:02:02.428788   21922 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0224 01:02:02.433754   21922 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0224 01:02:02.433769   21922 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0224 01:02:02.433775   21922 command_runner.go:130] > Device: 16h/22d	Inode: 982         Links: 1
	I0224 01:02:02.433784   21922 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0224 01:02:02.433793   21922 command_runner.go:130] > Access: 2023-02-24 01:02:02.409160681 +0000
	I0224 01:02:02.433801   21922 command_runner.go:130] > Modify: 2023-02-24 01:02:02.409160681 +0000
	I0224 01:02:02.433812   21922 command_runner.go:130] > Change: 2023-02-24 01:02:02.411162454 +0000
	I0224 01:02:02.433822   21922 command_runner.go:130] >  Birth: -
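
	Waiting on /var/run/cri-dockerd.sock is a simple poll-until-exists with a 60s budget; the stat output above confirms the socket appeared. A minimal sketch of such a wait loop:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists as a unix socket or the
	// timeout elapses, mirroring the "Will wait 60s for socket path" step.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("socket ready")
	}
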
	I0224 01:02:02.433996   21922 start.go:553] Will wait 60s for crictl version
	I0224 01:02:02.434045   21922 ssh_runner.go:195] Run: which crictl
	I0224 01:02:02.437424   21922 command_runner.go:130] > /usr/bin/crictl
	I0224 01:02:02.437483   21922 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0224 01:02:02.538455   21922 command_runner.go:130] > Version:  0.1.0
	I0224 01:02:02.538493   21922 command_runner.go:130] > RuntimeName:  docker
	I0224 01:02:02.538502   21922 command_runner.go:130] > RuntimeVersion:  20.10.23
	I0224 01:02:02.538510   21922 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0224 01:02:02.538932   21922 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0224 01:02:02.538999   21922 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 01:02:02.570135   21922 command_runner.go:130] > 20.10.23
	I0224 01:02:02.570215   21922 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 01:02:02.599099   21922 command_runner.go:130] > 20.10.23
	I0224 01:02:02.602493   21922 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0224 01:02:02.604074   21922 out.go:177]   - env NO_PROXY=192.168.39.217
	I0224 01:02:02.605399   21922 main.go:141] libmachine: (multinode-858631-m02) Calling .GetIP
	I0224 01:02:02.608005   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:02:02.608335   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
	I0224 01:02:02.608357   21922 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:02:02.608543   21922 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0224 01:02:02.612457   21922 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 01:02:02.624406   21922 certs.go:56] Setting up /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631 for IP: 192.168.39.3
	I0224 01:02:02.624427   21922 certs.go:186] acquiring lock for shared ca certs: {Name:mk0c9037d1d3974a6bc5ba375ef4804966dba284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 01:02:02.624540   21922 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.key
	I0224 01:02:02.624580   21922 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15909-4074/.minikube/proxy-client-ca.key
	I0224 01:02:02.624592   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0224 01:02:02.624605   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0224 01:02:02.624618   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0224 01:02:02.624631   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0224 01:02:02.624680   21922 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/11131.pem (1338 bytes)
	W0224 01:02:02.624707   21922 certs.go:397] ignoring /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/11131_empty.pem, impossibly tiny 0 bytes
	I0224 01:02:02.624717   21922 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 01:02:02.624755   21922 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem (1078 bytes)
	I0224 01:02:02.624782   21922 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem (1123 bytes)
	I0224 01:02:02.624804   21922 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem (1679 bytes)
	I0224 01:02:02.624841   21922 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem (1708 bytes)
	I0224 01:02:02.624866   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/11131.pem -> /usr/share/ca-certificates/11131.pem
	I0224 01:02:02.624883   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem -> /usr/share/ca-certificates/111312.pem
	I0224 01:02:02.624895   21922 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0224 01:02:02.625168   21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 01:02:02.646667   21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0224 01:02:02.668358   21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 01:02:02.690086   21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0224 01:02:02.714912   21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/certs/11131.pem --> /usr/share/ca-certificates/11131.pem (1338 bytes)
	I0224 01:02:02.738896   21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem --> /usr/share/ca-certificates/111312.pem (1708 bytes)
	I0224 01:02:02.763215   21922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 01:02:02.788917   21922 ssh_runner.go:195] Run: openssl version
	I0224 01:02:02.794215   21922 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0224 01:02:02.794488   21922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 01:02:02.804509   21922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 01:02:02.808922   21922 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0224 01:02:02.808946   21922 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0224 01:02:02.808985   21922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 01:02:02.814327   21922 command_runner.go:130] > b5213941
	I0224 01:02:02.814513   21922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0224 01:02:02.824104   21922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11131.pem && ln -fs /usr/share/ca-certificates/11131.pem /etc/ssl/certs/11131.pem"
	I0224 01:02:02.833352   21922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11131.pem
	I0224 01:02:02.837742   21922 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 24 00:46 /usr/share/ca-certificates/11131.pem
	I0224 01:02:02.838118   21922 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 00:46 /usr/share/ca-certificates/11131.pem
	I0224 01:02:02.838153   21922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11131.pem
	I0224 01:02:02.843274   21922 command_runner.go:130] > 51391683
	I0224 01:02:02.843309   21922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11131.pem /etc/ssl/certs/51391683.0"
	I0224 01:02:02.852208   21922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111312.pem && ln -fs /usr/share/ca-certificates/111312.pem /etc/ssl/certs/111312.pem"
	I0224 01:02:02.861585   21922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111312.pem
	I0224 01:02:02.865768   21922 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 24 00:46 /usr/share/ca-certificates/111312.pem
	I0224 01:02:02.866257   21922 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 00:46 /usr/share/ca-certificates/111312.pem
	I0224 01:02:02.866298   21922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111312.pem
	I0224 01:02:02.871503   21922 command_runner.go:130] > 3ec20f2e
	I0224 01:02:02.871554   21922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111312.pem /etc/ssl/certs/3ec20f2e.0"
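
The three hash-and-link passes above (minikubeCA.pem -> b5213941.0, 11131.pem -> 51391683.0, 111312.pem -> 3ec20f2e.0) follow OpenSSL's hashed trust-directory layout: TLS code looks a CA up by the subject-name hash of its certificate, so trusting a cert amounts to symlinking /etc/ssl/certs/<hash>.0 at it. A minimal Go sketch of one pass, assuming openssl is on PATH and the process can write /etc/ssl/certs (illustrative, not minikube's actual helper):

package main

import (
    "fmt"
    "os"
    "os/exec"
    "strings"
)

func main() {
    certPath := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path

    // Equivalent of: openssl x509 -hash -noout -in <cert>
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    if err != nil {
        fmt.Fprintln(os.Stderr, "hashing failed:", err)
        os.Exit(1)
    }
    hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as in the log

    // Equivalent of: ln -fs <cert> /etc/ssl/certs/<hash>.0
    link := "/etc/ssl/certs/" + hash + ".0"
    _ = os.Remove(link) // the -f in ln -fs: replace any stale link
    if err := os.Symlink(certPath, link); err != nil {
        fmt.Fprintln(os.Stderr, "symlink failed:", err)
        os.Exit(1)
    }
    fmt.Println("trusted", certPath, "via", link)
}
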
	I0224 01:02:02.880594   21922 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0224 01:02:02.914528   21922 command_runner.go:130] > cgroupfs
	I0224 01:02:02.914576   21922 cni.go:84] Creating CNI manager for ""
	I0224 01:02:02.914593   21922 cni.go:136] 2 nodes found, recommending kindnet
	I0224 01:02:02.914611   21922 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0224 01:02:02.914637   21922 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-858631 NodeName:multinode-858631-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0224 01:02:02.914761   21922 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-858631-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0224 01:02:02.914827   21922 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-858631-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-858631 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0224 01:02:02.914886   21922 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0224 01:02:02.922875   21922 command_runner.go:130] > kubeadm
	I0224 01:02:02.922889   21922 command_runner.go:130] > kubectl
	I0224 01:02:02.922895   21922 command_runner.go:130] > kubelet
	I0224 01:02:02.923167   21922 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 01:02:02.925191   21922 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0224 01:02:02.934316   21922 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
	I0224 01:02:02.949693   21922 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
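
The two "scp memory -->" lines write the kubelet service unit and its kubeadm drop-in straight from memory rather than copying files off the runner. A rough Go equivalent for the drop-in, with ExecStart trimmed to two flags for brevity (the full flag set appears in the log above; this is a sketch, not minikube's code):

package main

import (
    "os"
    "os/exec"
)

func main() {
    const dropIn = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf

[Install]
`
    if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0755); err != nil {
        panic(err)
    }
    if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0644); err != nil {
        panic(err)
    }
    // Pick up the new unit definition.
    if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
        panic(err)
    }
}
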
	I0224 01:02:02.964775   21922 ssh_runner.go:195] Run: grep 192.168.39.217	control-plane.minikube.internal$ /etc/hosts
	I0224 01:02:02.968499   21922 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.217	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
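
That one-liner is an idempotent /etc/hosts update: filter out any line already ending in a tab plus control-plane.minikube.internal, append the fresh mapping, then copy the temp file back over /etc/hosts with sudo. The same edit as a Go sketch (not minikube's code; needs root):

package main

import (
    "os"
    "strings"
)

func main() {
    const host = "control-plane.minikube.internal" // name minikube pins for the API server
    const ip = "192.168.39.217"                    // control-plane IP from this run

    data, err := os.ReadFile("/etc/hosts")
    if err != nil {
        panic(err)
    }
    var kept []string
    for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
        // Mirrors `grep -v $'\tcontrol-plane.minikube.internal$'`.
        if strings.HasSuffix(line, "\t"+host) {
            continue
        }
        kept = append(kept, line)
    }
    kept = append(kept, ip+"\t"+host)
    if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
        panic(err)
    }
}
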
	I0224 01:02:02.980060   21922 host.go:66] Checking if "multinode-858631" exists ...
	I0224 01:02:02.980294   21922 config.go:182] Loaded profile config "multinode-858631": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 01:02:02.980413   21922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:02:02.980465   21922 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:02:02.994585   21922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33111
	I0224 01:02:02.994927   21922 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:02:02.995351   21922 main.go:141] libmachine: Using API Version  1
	I0224 01:02:02.995370   21922 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:02:02.995664   21922 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:02:02.995820   21922 main.go:141] libmachine: (multinode-858631) Calling .DriverName
	I0224 01:02:02.995966   21922 start.go:301] JoinCluster: &{Name:multinode-858631 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-858631 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 01:02:02.996049   21922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0224 01:02:02.996069   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
	I0224 01:02:02.998945   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:02:02.999334   21922 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
	I0224 01:02:02.999361   21922 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:02:02.999494   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
	I0224 01:02:02.999654   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
	I0224 01:02:02.999787   21922 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
	I0224 01:02:02.999900   21922 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631/id_rsa Username:docker}
	I0224 01:02:03.187838   21922 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token jpub3d.g9uycynvwqj91385 --discovery-token-ca-cert-hash sha256:ffed4a97d00853d225d0ff07158c2bc3f749ee93cc75ad31fd39c6be0c93fde1 
	I0224 01:02:03.191840   21922 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0224 01:02:03.191878   21922 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jpub3d.g9uycynvwqj91385 --discovery-token-ca-cert-hash sha256:ffed4a97d00853d225d0ff07158c2bc3f749ee93cc75ad31fd39c6be0c93fde1 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-858631-m02"
	I0224 01:02:03.303186   21922 command_runner.go:130] > [preflight] Running pre-flight checks
	I0224 01:02:03.554006   21922 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0224 01:02:03.554029   21922 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0224 01:02:03.590491   21922 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 01:02:03.590522   21922 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 01:02:03.590531   21922 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0224 01:02:03.698835   21922 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0224 01:02:05.219161   21922 command_runner.go:130] > This node has joined the cluster:
	I0224 01:02:05.219190   21922 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0224 01:02:05.219200   21922 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0224 01:02:05.219210   21922 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0224 01:02:05.220747   21922 command_runner.go:130] ! W0224 01:02:03.287797    1271 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0224 01:02:05.220772   21922 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 01:02:05.221125   21922 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jpub3d.g9uycynvwqj91385 --discovery-token-ca-cert-hash sha256:ffed4a97d00853d225d0ff07158c2bc3f749ee93cc75ad31fd39c6be0c93fde1 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-858631-m02": (2.029224613s)
	I0224 01:02:05.221156   21922 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0224 01:02:05.458105   21922 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0224 01:02:05.458139   21922 start.go:303] JoinCluster complete in 2.462175128s
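
The join itself is the standard two-step kubeadm handshake visible in the Run lines above: mint a token on the control plane with "kubeadm token create --print-join-command --ttl=0", then execute the printed command on the worker with the extra flags minikube appends. Compressed into one Go sketch (both steps run locally here for brevity; the real code executes each over SSH on the appropriate host):

package main

import (
    "os/exec"
    "strings"
)

func main() {
    // Step 1, on the control plane: mint a token and print the matching
    // `kubeadm join ...` command (--ttl=0 makes the token non-expiring).
    out, err := exec.Command("kubeadm", "token", "create",
        "--print-join-command", "--ttl=0").Output()
    if err != nil {
        panic(err)
    }
    joinCmd := strings.Fields(strings.TrimSpace(string(out)))

    // Step 2, on the worker: run the printed command, plus the flags
    // minikube appends (preflight tolerance, cri-dockerd socket, node name).
    args := append(joinCmd[1:],
        "--ignore-preflight-errors=all",
        "--cri-socket", "/var/run/cri-dockerd.sock",
        "--node-name=multinode-858631-m02")
    if err := exec.Command(joinCmd[0], args...).Run(); err != nil {
        panic(err)
    }
}
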
	I0224 01:02:05.458149   21922 cni.go:84] Creating CNI manager for ""
	I0224 01:02:05.458154   21922 cni.go:136] 2 nodes found, recommending kindnet
	I0224 01:02:05.458194   21922 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0224 01:02:05.463696   21922 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0224 01:02:05.463724   21922 command_runner.go:130] >   Size: 2798344   	Blocks: 5472       IO Block: 4096   regular file
	I0224 01:02:05.463733   21922 command_runner.go:130] > Device: 11h/17d	Inode: 3542        Links: 1
	I0224 01:02:05.463744   21922 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0224 01:02:05.463752   21922 command_runner.go:130] > Access: 2023-02-24 01:00:20.396182736 +0000
	I0224 01:02:05.463761   21922 command_runner.go:130] > Modify: 2023-02-16 22:59:55.000000000 +0000
	I0224 01:02:05.463770   21922 command_runner.go:130] > Change: 2023-02-24 01:00:18.603182736 +0000
	I0224 01:02:05.463773   21922 command_runner.go:130] >  Birth: -
	I0224 01:02:05.463863   21922 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0224 01:02:05.463878   21922 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0224 01:02:05.480372   21922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0224 01:02:05.758307   21922 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0224 01:02:05.758335   21922 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0224 01:02:05.758343   21922 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0224 01:02:05.758350   21922 command_runner.go:130] > daemonset.apps/kindnet configured
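
Applying the CNI manifest is plain kubectl against the cluster's own kubeconfig, using the kubectl binary pinned to the cluster's Kubernetes version; the "unchanged"/"configured" output confirms kindnet was largely in place from the first node. The Run line above, as a Go sketch:

package main

import (
    "os"
    "os/exec"
)

func main() {
    // Same shape as the Run line above: the version-pinned kubectl,
    // the cluster's own kubeconfig, the staged manifest.
    cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.26.1/kubectl",
        "apply",
        "--kubeconfig=/var/lib/minikube/kubeconfig",
        "-f", "/var/tmp/minikube/cni.yaml")
    cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    if err := cmd.Run(); err != nil {
        panic(err)
    }
}
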
	I0224 01:02:05.758773   21922 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-4074/kubeconfig
	I0224 01:02:05.758991   21922 kapi.go:59] client config for multinode-858631: &rest.Config{Host:"https://192.168.39.217:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.key", CAFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 01:02:05.759246   21922 round_trippers.go:463] GET https://192.168.39.217:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0224 01:02:05.759256   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:05.759264   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:05.759270   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:05.760960   21922 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 01:02:05.760980   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:05.760986   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:05.760992   21922 round_trippers.go:580]     Content-Length: 291
	I0224 01:02:05.760997   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:05 GMT
	I0224 01:02:05.761003   21922 round_trippers.go:580]     Audit-Id: b6753881-ac46-4058-9359-5b36abe09428
	I0224 01:02:05.761009   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:05.761014   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:05.761020   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:05.761038   21922 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"1feec0bc-8f6f-4ed8-8e86-04a25e711058","resourceVersion":"416","creationTimestamp":"2023-02-24T01:00:59Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0224 01:02:05.761095   21922 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-858631" context rescaled to 1 replicas
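
The raw GET above reads the coredns Deployment's /scale subresource, and the rescale pins it at one replica so the test cluster does not grow a DNS pod per node. In client-go terms the round trip is a GetScale/UpdateScale pair (a sketch; the kubeconfig path is the conventional default, not this run's):

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    ctx := context.Background()

    // Read the /scale subresource (the raw GET in the log above).
    scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }

    // Cap coredns at a single replica and write the subresource back.
    scale.Spec.Replicas = 1
    if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
        panic(err)
    }
}
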
	I0224 01:02:05.761117   21922 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0224 01:02:05.763182   21922 out.go:177] * Verifying Kubernetes components...
	I0224 01:02:05.764385   21922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 01:02:05.778613   21922 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/15909-4074/kubeconfig
	I0224 01:02:05.778913   21922 kapi.go:59] client config for multinode-858631: &rest.Config{Host:"https://192.168.39.217:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/profiles/multinode-858631/client.key", CAFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 01:02:05.779259   21922 node_ready.go:35] waiting up to 6m0s for node "multinode-858631-m02" to be "Ready" ...
	I0224 01:02:05.779327   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
	I0224 01:02:05.779337   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:05.779350   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:05.779361   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:05.781214   21922 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 01:02:05.781236   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:05.781247   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:05 GMT
	I0224 01:02:05.781256   21922 round_trippers.go:580]     Audit-Id: 8c29cb5b-ca99-4f8b-9b49-4db4f3341e6b
	I0224 01:02:05.781265   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:05.781274   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:05.781286   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:05.781298   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:05.781435   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"476","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 3987 chars]
	I0224 01:02:06.282053   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
	I0224 01:02:06.282073   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:06.282082   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:06.282092   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:06.286304   21922 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 01:02:06.286327   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:06.286336   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:06.286342   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:06.286347   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:06.286353   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:06 GMT
	I0224 01:02:06.286359   21922 round_trippers.go:580]     Audit-Id: 93b5a00a-aa54-43cc-a4d7-0889df01d6d6
	I0224 01:02:06.286364   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:06.286625   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
	I0224 01:02:06.782050   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
	I0224 01:02:06.782079   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:06.782090   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:06.782105   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:06.787083   21922 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 01:02:06.787111   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:06.787122   21922 round_trippers.go:580]     Audit-Id: 40a918e8-cef0-4ea6-b073-7e1e2ccd4bff
	I0224 01:02:06.787131   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:06.787139   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:06.787148   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:06.787158   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:06.787164   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:06 GMT
	I0224 01:02:06.787344   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
	I0224 01:02:07.282686   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
	I0224 01:02:07.282708   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:07.282717   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:07.282723   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:07.285275   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:07.285303   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:07.285314   21922 round_trippers.go:580]     Audit-Id: 15f290ee-ba9a-4585-ae28-7c45df7dec0e
	I0224 01:02:07.285323   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:07.285333   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:07.285342   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:07.285350   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:07.285359   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:07 GMT
	I0224 01:02:07.285509   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
	I0224 01:02:07.782037   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
	I0224 01:02:07.782061   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:07.782075   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:07.782083   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:07.784401   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:07.784426   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:07.784436   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:07.784445   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:07.784454   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:07.784464   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:07 GMT
	I0224 01:02:07.784472   21922 round_trippers.go:580]     Audit-Id: 652b872b-7c19-4a9d-a726-7a63aeb72144
	I0224 01:02:07.784489   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:07.784673   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
	I0224 01:02:07.784932   21922 node_ready.go:58] node "multinode-858631-m02" has status "Ready":"False"
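
Each repeated GET above is one tick of the readiness poll announced by node_ready.go: fetch the Node and check whether its NodeReady condition has flipped to True (the "Ready":"False" lines show it has not yet). The same loop in client-go terms (a sketch under the default kubeconfig, not minikube's actual wait helper):

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the Node's NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
    for _, c := range n.Status.Conditions {
        if c.Type == corev1.NodeReady {
            return c.Status == corev1.ConditionTrue
        }
    }
    return false
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    for {
        n, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-858631-m02", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if nodeReady(n) {
            fmt.Println("node is Ready")
            return
        }
        time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
    }
}
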
	I0224 01:02:08.282034   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
	I0224 01:02:08.282057   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:08.282069   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:08.282078   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:08.284448   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:08.284473   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:08.284483   21922 round_trippers.go:580]     Audit-Id: 2f93f6ce-ce6e-4228-bb07-9e38a91554c7
	I0224 01:02:08.284492   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:08.284506   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:08.284518   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:08.284525   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:08.284533   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:08 GMT
	I0224 01:02:08.284701   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
	I0224 01:02:08.782368   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
	I0224 01:02:08.782395   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:08.782406   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:08.782413   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:08.785295   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:08.785315   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:08.785325   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:08.785334   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:08 GMT
	I0224 01:02:08.785341   21922 round_trippers.go:580]     Audit-Id: 7c93d20c-db45-4e4d-88cc-2d314e25e39a
	I0224 01:02:08.785349   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:08.785358   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:08.785371   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:08.785926   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
	I0224 01:02:09.282621   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
	I0224 01:02:09.282649   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:09.282657   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:09.282663   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:09.285036   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:09.285061   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:09.285071   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:09 GMT
	I0224 01:02:09.285081   21922 round_trippers.go:580]     Audit-Id: 4ff62658-dc4c-4656-b29b-cc96e459d5dd
	I0224 01:02:09.285090   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:09.285101   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:09.285114   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:09.285126   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:09.285264   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
	I0224 01:02:09.783004   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
	I0224 01:02:09.783028   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:09.783036   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:09.783042   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:09.785312   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:09.785328   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:09.785335   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:09.785341   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:09 GMT
	I0224 01:02:09.785350   21922 round_trippers.go:580]     Audit-Id: 80b8f0bd-af67-4e75-87f0-5f163521c4e7
	I0224 01:02:09.785355   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:09.785368   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:09.785376   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:09.785685   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
	I0224 01:02:09.785940   21922 node_ready.go:58] node "multinode-858631-m02" has status "Ready":"False"
	I0224 01:02:10.282329   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
	I0224 01:02:10.282355   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:10.282365   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:10.282374   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:10.284712   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:10.284730   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:10.284737   21922 round_trippers.go:580]     Audit-Id: 5a7eadab-920a-4585-8976-f61660a5ae54
	I0224 01:02:10.284743   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:10.284748   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:10.284754   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:10.284762   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:10.284770   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:10 GMT
	I0224 01:02:10.285190   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
	I0224 01:02:10.782911   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
	I0224 01:02:10.782939   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:10.782951   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:10.782960   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:10.785111   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:10.785135   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:10.785147   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:10.785155   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:10.785163   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:10.785171   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:10 GMT
	I0224 01:02:10.785179   21922 round_trippers.go:580]     Audit-Id: 1be1ec50-92d5-4955-bbd8-dc15edf8cd74
	I0224 01:02:10.785187   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:10.785609   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
	I0224 01:02:11.282226   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
	I0224 01:02:11.282254   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:11.282264   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:11.282272   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:11.284394   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:11.284415   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:11.284423   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:11.284429   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:11.284435   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:11.284440   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:11 GMT
	I0224 01:02:11.284446   21922 round_trippers.go:580]     Audit-Id: 7dd30b71-b225-4ba5-a6e4-bdc37eb82a93
	I0224 01:02:11.284452   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:11.284557   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
	I0224 01:02:11.782084   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
	I0224 01:02:11.782107   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:11.782115   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:11.782122   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:11.784525   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:11.784548   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:11.784559   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:11 GMT
	I0224 01:02:11.784567   21922 round_trippers.go:580]     Audit-Id: f304a726-4324-4302-ad9c-d5b091415fea
	I0224 01:02:11.784576   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:11.784584   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:11.784596   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:11.784605   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:11.784807   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
	I0224 01:02:12.282842   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
	I0224 01:02:12.282870   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:12.282879   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:12.282885   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:12.285157   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:12.285180   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:12.285190   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:12.285199   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:12.285207   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:12.285216   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:12.285228   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:12 GMT
	I0224 01:02:12.285238   21922 round_trippers.go:580]     Audit-Id: dc7c542e-1450-465c-abb6-40ba4f5772b0
	I0224 01:02:12.285493   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
	I0224 01:02:12.285874   21922 node_ready.go:58] node "multinode-858631-m02" has status "Ready":"False"
	I0224 01:02:12.782696   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
	I0224 01:02:12.782716   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:12.782724   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:12.782730   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:12.785264   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:12.785288   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:12.785298   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:12.785307   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:12 GMT
	I0224 01:02:12.785315   21922 round_trippers.go:580]     Audit-Id: 73f7401d-1343-4018-add1-f0e611b621ca
	I0224 01:02:12.785328   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:12.785340   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:12.785348   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:12.785558   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
	I0224 01:02:13.282993   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
	I0224 01:02:13.283026   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:13.283038   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:13.283048   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:13.285725   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:13.285743   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:13.285753   21922 round_trippers.go:580]     Audit-Id: a32c6ee2-8f25-43b6-82ab-80f87c8f8d46
	I0224 01:02:13.285759   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:13.285764   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:13.285769   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:13.285776   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:13.285785   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:13 GMT
	I0224 01:02:13.286231   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
	I0224 01:02:13.782956   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
	I0224 01:02:13.782983   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:13.782996   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:13.783006   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:13.785805   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:13.785835   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:13.785842   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:13.785847   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:13.785855   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:13.785860   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:13 GMT
	I0224 01:02:13.785866   21922 round_trippers.go:580]     Audit-Id: 2603e4e6-fdf1-45ef-88af-cfa11296d9b7
	I0224 01:02:13.785875   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:13.786049   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
	I0224 01:02:14.282694   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
	I0224 01:02:14.282721   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:14.282732   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:14.282739   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:14.285702   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:14.285726   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:14.285736   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:14.285746   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:14 GMT
	I0224 01:02:14.285754   21922 round_trippers.go:580]     Audit-Id: 55557df8-6a0c-45aa-b1af-a5e1a0a0278c
	I0224 01:02:14.285764   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:14.285772   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:14.285787   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:14.285903   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"479","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 4096 chars]
	I0224 01:02:14.286244   21922 node_ready.go:58] node "multinode-858631-m02" has status "Ready":"False"
	I0224 01:02:14.782677   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
	I0224 01:02:14.782710   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:14.782723   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:14.782733   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:14.786057   21922 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 01:02:14.786079   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:14.786090   21922 round_trippers.go:580]     Audit-Id: 612cef84-f190-4adf-ad57-d39992e3c8a6
	I0224 01:02:14.786098   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:14.786107   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:14.786116   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:14.786134   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:14.786143   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:14 GMT
	I0224 01:02:14.786222   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"502","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4265 chars]
	I0224 01:02:15.282788   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
	I0224 01:02:15.282810   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:15.282819   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:15.282825   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:15.285349   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:15.285363   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:15.285370   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:15.285377   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:15 GMT
	I0224 01:02:15.285390   21922 round_trippers.go:580]     Audit-Id: aae233cb-3e01-4ca5-854c-98eef75f4982
	I0224 01:02:15.285404   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:15.285416   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:15.285427   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:15.285677   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"502","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4265 chars]
	I0224 01:02:15.782340   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
	I0224 01:02:15.782363   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:15.782371   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:15.782377   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:15.784840   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:15.784864   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:15.784875   21922 round_trippers.go:580]     Audit-Id: 3197293c-c98c-4d57-8f8b-4db53f94813c
	I0224 01:02:15.784884   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:15.784891   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:15.784897   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:15.784904   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:15.784910   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:15 GMT
	I0224 01:02:15.785575   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"502","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4265 chars]
	I0224 01:02:16.282165   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
	I0224 01:02:16.282197   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:16.282206   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:16.282212   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:16.284645   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:16.284669   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:16.284679   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:16 GMT
	I0224 01:02:16.284690   21922 round_trippers.go:580]     Audit-Id: cedcf71b-4e25-4d70-a91f-0054be8450a3
	I0224 01:02:16.284702   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:16.284710   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:16.284719   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:16.284728   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:16.284890   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"502","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4265 chars]
	I0224 01:02:16.782504   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
	I0224 01:02:16.782529   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:16.782549   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:16.782556   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:16.785126   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:16.785150   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:16.785159   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:16.785167   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:16 GMT
	I0224 01:02:16.785174   21922 round_trippers.go:580]     Audit-Id: 5e75f94f-2e8f-4d17-bf56-659cfbd413d1
	I0224 01:02:16.785181   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:16.785189   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:16.785197   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:16.785463   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"502","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4265 chars]
	I0224 01:02:16.785728   21922 node_ready.go:58] node "multinode-858631-m02" has status "Ready":"False"
	I0224 01:02:17.282739   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
	I0224 01:02:17.282771   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:17.282783   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:17.282793   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:17.285133   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:17.285155   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:17.285162   21922 round_trippers.go:580]     Audit-Id: 9d749906-ff8f-4edb-95dc-af80d4a9dc8b
	I0224 01:02:17.285168   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:17.285174   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:17.285179   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:17.285185   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:17.285190   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:17 GMT
	I0224 01:02:17.285510   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"502","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4265 chars]
	I0224 01:02:17.782150   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
	I0224 01:02:17.782171   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:17.782179   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:17.782186   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:17.785063   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:17.785082   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:17.785089   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:17.785095   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:17.785100   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:17.785106   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:17.785111   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:17 GMT
	I0224 01:02:17.785123   21922 round_trippers.go:580]     Audit-Id: e8e42f89-79fa-4bed-80ba-b62b6eb17a9c
	I0224 01:02:17.785438   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"507","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4131 chars]
	I0224 01:02:17.785694   21922 node_ready.go:49] node "multinode-858631-m02" has status "Ready":"True"
	I0224 01:02:17.785715   21922 node_ready.go:38] duration metric: took 12.006435326s waiting for node "multinode-858631-m02" to be "Ready" ...
	I0224 01:02:17.785727   21922 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
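
The two lines above summarize the readiness strategy this log is executing: poll GET /api/v1/nodes/<name> roughly every 500ms (visible in the alternating .282/.782 timestamps) until the Node's Ready condition flips to True (12.0s here), then wait up to 6m0s for every system-critical pod matched by the listed component/k8s-app labels. The Go sketch below reproduces the node half of that loop with plain client-go; it is an illustration under stated assumptions, not minikube's actual node_ready.go, and the function and variable names are hypothetical.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the named node reports a Ready
// condition of True, mirroring the ~500ms GET loop in the log above.
// (Sketch only; not minikube's implementation.)
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat errors as "not ready yet" and keep polling
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "multinode-858631-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
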
	I0224 01:02:17.785783   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0224 01:02:17.785791   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:17.785798   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:17.785808   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:17.789843   21922 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 01:02:17.789865   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:17.789876   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:17.789885   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:17.789893   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:17.789905   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:17.789912   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:17 GMT
	I0224 01:02:17.789921   21922 round_trippers.go:580]     Audit-Id: 8b6bcfe1-d72c-4886-bb04-bb9b256b5aef
	I0224 01:02:17.791583   21922 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"507"},"items":[{"metadata":{"name":"coredns-787d4945fb-xhwx9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"9d799d4f-0d4b-468e-85ad-052c1735e35c","resourceVersion":"412","creationTimestamp":"2023-02-24T01:01:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"48366e19-6f4d-4478-b1a5-1ebb02afef5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48366e19-6f4d-4478-b1a5-1ebb02afef5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67422 chars]
	I0224 01:02:17.793521   21922 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-xhwx9" in "kube-system" namespace to be "Ready" ...
	I0224 01:02:17.793582   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-xhwx9
	I0224 01:02:17.793593   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:17.793600   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:17.793609   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:17.799554   21922 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0224 01:02:17.799572   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:17.799579   21922 round_trippers.go:580]     Audit-Id: 46d3468f-9227-4b5a-a90c-cd831810d0db
	I0224 01:02:17.799585   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:17.799592   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:17.799601   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:17.799618   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:17.799627   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:17 GMT
	I0224 01:02:17.799797   21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-xhwx9","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"9d799d4f-0d4b-468e-85ad-052c1735e35c","resourceVersion":"412","creationTimestamp":"2023-02-24T01:01:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"48366e19-6f4d-4478-b1a5-1ebb02afef5a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48366e19-6f4d-4478-b1a5-1ebb02afef5a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6282 chars]
	I0224 01:02:17.800303   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:02:17.800319   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:17.800329   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:17.800344   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:17.802385   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:17.802401   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:17.802411   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:17.802419   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:17.802427   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:17.802440   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:17.802450   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:17 GMT
	I0224 01:02:17.802462   21922 round_trippers.go:580]     Audit-Id: b6dae04e-a272-405b-b624-fe25000cc924
	I0224 01:02:17.802717   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"421","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5115 chars]
	I0224 01:02:17.803072   21922 pod_ready.go:92] pod "coredns-787d4945fb-xhwx9" in "kube-system" namespace has status "Ready":"True"
	I0224 01:02:17.803085   21922 pod_ready.go:81] duration metric: took 9.54594ms waiting for pod "coredns-787d4945fb-xhwx9" in "kube-system" namespace to be "Ready" ...
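
From here the log repeats one pattern per system-critical pod: GET the pod, check its readiness, then re-fetch the hosting node as a sanity check. A pod's readiness is read from status.conditions just as for nodes, only with the PodReady condition type; a minimal, hypothetical check (not minikube's pod_ready.go) looks like this.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podReady reports whether a Pod's Ready condition is True -- the check behind
// the `has status "Ready":"True"` lines above. (Sketch only.)
func podReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{
		Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionTrue}},
	}}
	fmt.Println(podReady(pod)) // prints: true
}
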
	I0224 01:02:17.803095   21922 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-858631" in "kube-system" namespace to be "Ready" ...
	I0224 01:02:17.803146   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-858631
	I0224 01:02:17.803156   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:17.803163   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:17.803172   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:17.805776   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:17.805793   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:17.805803   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:17.805810   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:17.805827   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:17.805840   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:17.805854   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:17 GMT
	I0224 01:02:17.805869   21922 round_trippers.go:580]     Audit-Id: f8cf8076-d6d3-4ab4-832e-1152316006db
	I0224 01:02:17.806008   21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-858631","namespace":"kube-system","uid":"7b4b146b-12c8-4b3f-a682-8ab64a9135cb","resourceVersion":"276","creationTimestamp":"2023-02-24T01:01:00Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.217:2379","kubernetes.io/config.hash":"dc4f8bffc9d97af45e685dda88cd2a94","kubernetes.io/config.mirror":"dc4f8bffc9d97af45e685dda88cd2a94","kubernetes.io/config.seen":"2023-02-24T01:00:59.730785607Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5856 chars]
	I0224 01:02:17.806426   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:02:17.806440   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:17.806451   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:17.806464   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:17.807966   21922 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 01:02:17.807981   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:17.807991   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:17.808007   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:17.808020   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:17.808029   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:17.808042   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:17 GMT
	I0224 01:02:17.808055   21922 round_trippers.go:580]     Audit-Id: 458c5759-0a1f-4fa0-b545-c6e2bb2aafdb
	I0224 01:02:17.808215   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"421","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5115 chars]
	I0224 01:02:17.808460   21922 pod_ready.go:92] pod "etcd-multinode-858631" in "kube-system" namespace has status "Ready":"True"
	I0224 01:02:17.808472   21922 pod_ready.go:81] duration metric: took 5.368739ms waiting for pod "etcd-multinode-858631" in "kube-system" namespace to be "Ready" ...
	I0224 01:02:17.808491   21922 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-858631" in "kube-system" namespace to be "Ready" ...
	I0224 01:02:17.808541   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-858631
	I0224 01:02:17.808551   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:17.808562   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:17.808576   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:17.811060   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:17.811079   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:17.811088   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:17.811097   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:17.811110   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:17.811120   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:17 GMT
	I0224 01:02:17.811133   21922 round_trippers.go:580]     Audit-Id: 4c81b437-0719-49a3-8ac1-9d49b1ee705b
	I0224 01:02:17.811144   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:17.811278   21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-858631","namespace":"kube-system","uid":"ad778dac-86be-4c5e-8b3f-2afb354e374a","resourceVersion":"299","creationTimestamp":"2023-02-24T01:01:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.217:8443","kubernetes.io/config.hash":"2a1bcd287381cc62f4271365e9d57dba","kubernetes.io/config.mirror":"2a1bcd287381cc62f4271365e9d57dba","kubernetes.io/config.seen":"2023-02-24T01:00:59.730814539Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7392 chars]
	I0224 01:02:17.811585   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:02:17.811597   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:17.811607   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:17.811619   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:17.813796   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:17.813814   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:17.813823   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:17.813832   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:17.813841   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:17.813853   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:17 GMT
	I0224 01:02:17.813865   21922 round_trippers.go:580]     Audit-Id: cc6448e0-1cbd-452b-b1a3-54709f329fa1
	I0224 01:02:17.813884   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:17.813984   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"421","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5115 chars]
	I0224 01:02:17.814228   21922 pod_ready.go:92] pod "kube-apiserver-multinode-858631" in "kube-system" namespace has status "Ready":"True"
	I0224 01:02:17.814239   21922 pod_ready.go:81] duration metric: took 5.738671ms waiting for pod "kube-apiserver-multinode-858631" in "kube-system" namespace to be "Ready" ...
	I0224 01:02:17.814249   21922 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-858631" in "kube-system" namespace to be "Ready" ...
	I0224 01:02:17.814286   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-858631
	I0224 01:02:17.814295   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:17.814305   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:17.814316   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:17.815755   21922 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 01:02:17.815772   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:17.815781   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:17.815790   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:17.815805   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:17.815814   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:17.815825   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:17 GMT
	I0224 01:02:17.815833   21922 round_trippers.go:580]     Audit-Id: fa46faeb-e840-4d29-93f1-01d3abcac42b
	I0224 01:02:17.815953   21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-858631","namespace":"kube-system","uid":"c1e4ec9e-a1e9-4f43-8b1b-95c797d33242","resourceVersion":"272","creationTimestamp":"2023-02-24T01:01:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"cb3b8d57c02f5e81e5a272ffb5f3fbe3","kubernetes.io/config.mirror":"cb3b8d57c02f5e81e5a272ffb5f3fbe3","kubernetes.io/config.seen":"2023-02-24T01:00:59.730815908Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6957 chars]
	I0224 01:02:17.816252   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:02:17.816264   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:17.816275   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:17.816285   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:17.818448   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:17.818466   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:17.818476   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:17 GMT
	I0224 01:02:17.818492   21922 round_trippers.go:580]     Audit-Id: b6ab3b64-242f-4cbe-b61a-4c2c449d202b
	I0224 01:02:17.818504   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:17.818512   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:17.818524   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:17.818536   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:17.818635   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"421","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5115 chars]
	I0224 01:02:17.818868   21922 pod_ready.go:92] pod "kube-controller-manager-multinode-858631" in "kube-system" namespace has status "Ready":"True"
	I0224 01:02:17.818878   21922 pod_ready.go:81] duration metric: took 4.622614ms waiting for pod "kube-controller-manager-multinode-858631" in "kube-system" namespace to be "Ready" ...
	I0224 01:02:17.818889   21922 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vlrn6" in "kube-system" namespace to be "Ready" ...
	I0224 01:02:17.982775   21922 request.go:622] Waited for 163.835849ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vlrn6
	I0224 01:02:17.982833   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vlrn6
	I0224 01:02:17.982838   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:17.982846   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:17.982852   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:17.986506   21922 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 01:02:17.986524   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:17.986531   21922 round_trippers.go:580]     Audit-Id: 1136dba7-3b62-4a56-aa8a-9ab2da34bd7b
	I0224 01:02:17.986537   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:17.986543   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:17.986550   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:17.986561   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:17.986571   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:17 GMT
	I0224 01:02:17.986698   21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vlrn6","generateName":"kube-proxy-","namespace":"kube-system","uid":"ed1ab279-4267-4c3c-a68d-a729dc29f05b","resourceVersion":"367","creationTimestamp":"2023-02-24T01:01:11Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4ec6a9ff-44a2-44e8-9e3b-270212238f31","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ec6a9ff-44a2-44e8-9e3b-270212238f31\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0224 01:02:18.182818   21922 request.go:622] Waited for 195.711259ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:02:18.182864   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:02:18.182869   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:18.182877   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:18.182883   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:18.185313   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:18.185332   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:18.185339   21922 round_trippers.go:580]     Audit-Id: ef86a469-d957-4cc8-893b-98dbce25c375
	I0224 01:02:18.185345   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:18.185351   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:18.185356   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:18.185362   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:18.185367   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:18 GMT
	I0224 01:02:18.185572   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"421","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5115 chars]
	I0224 01:02:18.185972   21922 pod_ready.go:92] pod "kube-proxy-vlrn6" in "kube-system" namespace has status "Ready":"True"
	I0224 01:02:18.185987   21922 pod_ready.go:81] duration metric: took 367.092131ms waiting for pod "kube-proxy-vlrn6" in "kube-system" namespace to be "Ready" ...
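
Note the jump in per-pod wait time: coredns, etcd, kube-apiserver, and kube-controller-manager each resolved in 4-10ms, while kube-proxy-vlrn6 took 367ms, nearly all of it the 163.8ms and 195.7ms "client-side throttling" delays logged above. As the message itself says, those waits come from client-go's client-side rate limiter (by default about 5 requests/s with a burst of 10), not from the server's priority-and-fairness: once the burst is spent, each request queues before it is sent. If this polling cadence were ever a problem, a client could raise the budget on its rest.Config, as in this illustrative (non-minikube) snippet with example values.

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// client-go throttles on the client once QPS/Burst are exhausted; these
	// example values trade extra API-server load for fewer waits like the
	// "Waited for ..." lines above. Defaults are roughly QPS=5, Burst=10.
	cfg.QPS = 50
	cfg.Burst = 100
	_ = kubernetes.NewForConfigOrDie(cfg)
}
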
	I0224 01:02:18.185999   21922 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wrvgw" in "kube-system" namespace to be "Ready" ...
	I0224 01:02:18.382990   21922 request.go:622] Waited for 196.924858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrvgw
	I0224 01:02:18.383063   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrvgw
	I0224 01:02:18.383069   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:18.383080   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:18.383102   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:18.385775   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:18.385798   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:18.385808   21922 round_trippers.go:580]     Audit-Id: f8e8bc7d-14a9-433a-ad47-8581e9ac35be
	I0224 01:02:18.385829   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:18.385838   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:18.385850   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:18.385863   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:18.385879   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:18 GMT
	I0224 01:02:18.386590   21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wrvgw","generateName":"kube-proxy-","namespace":"kube-system","uid":"1b634754-3905-4781-b367-af19b8dd4e3d","resourceVersion":"491","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4ec6a9ff-44a2-44e8-9e3b-270212238f31","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ec6a9ff-44a2-44e8-9e3b-270212238f31\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0224 01:02:18.582481   21922 request.go:622] Waited for 195.368039ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
	I0224 01:02:18.582533   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631-m02
	I0224 01:02:18.582538   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:18.582556   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:18.582565   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:18.584860   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:18.584878   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:18.584884   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:18.584890   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:18.584898   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:18.584907   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:18 GMT
	I0224 01:02:18.584916   21922 round_trippers.go:580]     Audit-Id: 34bd7715-f9f7-43af-b872-b6cb187fbd72
	I0224 01:02:18.584925   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:18.585047   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631-m02","uid":"dd5339b8-b185-48ad-a871-7f3cca66e634","resourceVersion":"507","creationTimestamp":"2023-02-24T01:02:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:02:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4131 chars]
	I0224 01:02:18.585411   21922 pod_ready.go:92] pod "kube-proxy-wrvgw" in "kube-system" namespace has status "Ready":"True"
	I0224 01:02:18.585428   21922 pod_ready.go:81] duration metric: took 399.421408ms waiting for pod "kube-proxy-wrvgw" in "kube-system" namespace to be "Ready" ...
	I0224 01:02:18.585440   21922 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-858631" in "kube-system" namespace to be "Ready" ...
	I0224 01:02:18.782907   21922 request.go:622] Waited for 197.411303ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-858631
	I0224 01:02:18.782960   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-858631
	I0224 01:02:18.782964   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:18.782979   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:18.782988   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:18.786814   21922 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 01:02:18.786841   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:18.786853   21922 round_trippers.go:580]     Audit-Id: 8c52f398-49c4-4043-8b97-f9250c82333f
	I0224 01:02:18.786862   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:18.786870   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:18.786879   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:18.786887   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:18.786896   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:18 GMT
	I0224 01:02:18.787246   21922 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-858631","namespace":"kube-system","uid":"fcadaacc-9d90-4113-9bf9-b77ccbc47586","resourceVersion":"294","creationTimestamp":"2023-02-24T01:01:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a679af228396ab9ab09a15d1ab16cad8","kubernetes.io/config.mirror":"a679af228396ab9ab09a15d1ab16cad8","kubernetes.io/config.seen":"2023-02-24T01:00:59.730816890Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T01:01:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4687 chars]
	I0224 01:02:18.983025   21922 request.go:622] Waited for 195.394792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:02:18.983083   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/multinode-858631
	I0224 01:02:18.983089   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:18.983099   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:18.983108   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:18.985643   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:18.985671   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:18.985682   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:18.985691   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:18.985703   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:18 GMT
	I0224 01:02:18.985716   21922 round_trippers.go:580]     Audit-Id: 0b617519-9b21-410f-8e96-32f6b761a6a0
	I0224 01:02:18.985728   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:18.985745   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:18.985936   21922 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"421","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T01:00:56Z","fieldsType":"FieldsV1","fi [truncated 5115 chars]
	I0224 01:02:18.986380   21922 pod_ready.go:92] pod "kube-scheduler-multinode-858631" in "kube-system" namespace has status "Ready":"True"
	I0224 01:02:18.986402   21922 pod_ready.go:81] duration metric: took 400.954409ms waiting for pod "kube-scheduler-multinode-858631" in "kube-system" namespace to be "Ready" ...
	I0224 01:02:18.986418   21922 pod_ready.go:38] duration metric: took 1.200674232s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 01:02:18.986441   21922 system_svc.go:44] waiting for kubelet service to be running ....
	I0224 01:02:18.986490   21922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 01:02:19.000491   21922 system_svc.go:56] duration metric: took 14.043504ms WaitForService to wait for kubelet.
	I0224 01:02:19.000516   21922 kubeadm.go:578] duration metric: took 13.239381651s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0224 01:02:19.000542   21922 node_conditions.go:102] verifying NodePressure condition ...
	I0224 01:02:19.182971   21922 request.go:622] Waited for 182.359111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes
	I0224 01:02:19.183041   21922 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes
	I0224 01:02:19.183051   21922 round_trippers.go:469] Request Headers:
	I0224 01:02:19.183065   21922 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0224 01:02:19.183078   21922 round_trippers.go:473]     Accept: application/json, */*
	I0224 01:02:19.185880   21922 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 01:02:19.185904   21922 round_trippers.go:577] Response Headers:
	I0224 01:02:19.185912   21922 round_trippers.go:580]     Content-Type: application/json
	I0224 01:02:19.185918   21922 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6cc955f0-985b-4ac1-a1eb-a1cbc81e9169
	I0224 01:02:19.185923   21922 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9403f208-f0e9-4e17-9fae-379b3359ab90
	I0224 01:02:19.185929   21922 round_trippers.go:580]     Date: Fri, 24 Feb 2023 01:02:19 GMT
	I0224 01:02:19.185935   21922 round_trippers.go:580]     Audit-Id: d3a7e02b-d364-4066-af5d-43f8fc3d19a1
	I0224 01:02:19.185948   21922 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 01:02:19.186293   21922 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"509"},"items":[{"metadata":{"name":"multinode-858631","uid":"f3ba32e9-6c16-4daf-be0f-a52045fc7b5b","resourceVersion":"421","creationTimestamp":"2023-02-24T01:00:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-858631","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c13299ce0b45f38f7f45d3bc31124c3ea59c0510","minikube.k8s.io/name":"multinode-858631","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T01_01_00_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10291 chars]
	I0224 01:02:19.186775   21922 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0224 01:02:19.186792   21922 node_conditions.go:123] node cpu capacity is 2
	I0224 01:02:19.186801   21922 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0224 01:02:19.186807   21922 node_conditions.go:123] node cpu capacity is 2
	I0224 01:02:19.186823   21922 node_conditions.go:105] duration metric: took 186.275755ms to run NodePressure ...
	I0224 01:02:19.186836   21922 start.go:228] waiting for startup goroutines ...
	I0224 01:02:19.186861   21922 start.go:242] writing updated cluster config ...
	I0224 01:02:19.187143   21922 ssh_runner.go:195] Run: rm -f paused
	I0224 01:02:19.236616   21922 start.go:555] kubectl: 1.26.1, cluster: 1.26.1 (minor skew: 0)
	I0224 01:02:19.239167   21922 out.go:177] * Done! kubectl is now configured to use "multinode-858631" cluster and "default" namespace by default
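
The pod_ready.go and round_trippers.go lines above are an ordinary client-go polling loop: GET the pod, inspect its Ready condition, sleep, retry, with the default client-side rate limiter producing the "Waited for ... due to client-side throttling" messages. A minimal sketch of that kind of readiness check (illustrative names, standard kubeconfig assumed; this is not minikube's actual helper):

	// podready_sketch.go - poll a pod until its Ready condition is True,
	// mirroring the "waiting up to 6m0s for pod ..." lines above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		// client-go throttles at ~5 QPS by default; bursts of GETs beyond
		// that budget yield the client-side throttling waits logged above.
		cs := kubernetes.NewForConfigOrDie(cfg)
		err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-wrvgw", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
		fmt.Println("ready:", err == nil)
	}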
	
	* 
	* ==> Docker <==
	* -- Journal begins at Fri 2023-02-24 01:00:19 UTC, ends at Fri 2023-02-24 01:03:51 UTC. --
	Feb 24 01:01:19 multinode-858631 dockerd[971]: time="2023-02-24T01:01:19.151874365Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ebafedff9cf6c1fa9d9f3ff5d68acb3bb78ff26a73d0566344ce0badc8c3958e pid=4737 runtime=io.containerd.runc.v2
	Feb 24 01:01:23 multinode-858631 dockerd[971]: time="2023-02-24T01:01:23.563170227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 24 01:01:23 multinode-858631 dockerd[971]: time="2023-02-24T01:01:23.563665339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 24 01:01:23 multinode-858631 dockerd[971]: time="2023-02-24T01:01:23.563732396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 24 01:01:23 multinode-858631 dockerd[971]: time="2023-02-24T01:01:23.564250584Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/69850c46e8835d3ecea60364f44420c7bc5fc0d2fb9cce1e77292520a6704954 pid=5334 runtime=io.containerd.runc.v2
	Feb 24 01:01:24 multinode-858631 dockerd[971]: time="2023-02-24T01:01:24.029971166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 24 01:01:24 multinode-858631 dockerd[971]: time="2023-02-24T01:01:24.030013919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 24 01:01:24 multinode-858631 dockerd[971]: time="2023-02-24T01:01:24.030023180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 24 01:01:24 multinode-858631 dockerd[971]: time="2023-02-24T01:01:24.030470862Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/789f0dcee13fe65f35246d61a8e20169e3f809bdf16534c169f1d396a8c87a45 pid=5374 runtime=io.containerd.runc.v2
	Feb 24 01:01:25 multinode-858631 dockerd[971]: time="2023-02-24T01:01:25.038254723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 24 01:01:25 multinode-858631 dockerd[971]: time="2023-02-24T01:01:25.038337994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 24 01:01:25 multinode-858631 dockerd[971]: time="2023-02-24T01:01:25.038349351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 24 01:01:25 multinode-858631 dockerd[971]: time="2023-02-24T01:01:25.041426903Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/16cccc3389964c69e4d80432441960c0308efe13b6105bc10ffbb73ab2cb03ef pid=5421 runtime=io.containerd.runc.v2
	Feb 24 01:01:25 multinode-858631 dockerd[971]: time="2023-02-24T01:01:25.621713957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 24 01:01:25 multinode-858631 dockerd[971]: time="2023-02-24T01:01:25.621950548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 24 01:01:25 multinode-858631 dockerd[971]: time="2023-02-24T01:01:25.621962934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 24 01:01:25 multinode-858631 dockerd[971]: time="2023-02-24T01:01:25.622519588Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/891f0c08a14f74d64b797ffdbd7bacd02c74123a5e93fedf501d88b5168855f3 pid=5509 runtime=io.containerd.runc.v2
	Feb 24 01:02:20 multinode-858631 dockerd[971]: time="2023-02-24T01:02:20.387973381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 24 01:02:20 multinode-858631 dockerd[971]: time="2023-02-24T01:02:20.388493762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 24 01:02:20 multinode-858631 dockerd[971]: time="2023-02-24T01:02:20.388653008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 24 01:02:20 multinode-858631 dockerd[971]: time="2023-02-24T01:02:20.389029743Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/18d4897e34fa98abde0b25295106273c2f2ae532a4b0b82c55369cb706b48759 pid=6155 runtime=io.containerd.runc.v2
	Feb 24 01:02:22 multinode-858631 dockerd[971]: time="2023-02-24T01:02:22.045153538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 24 01:02:22 multinode-858631 dockerd[971]: time="2023-02-24T01:02:22.045368227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 24 01:02:22 multinode-858631 dockerd[971]: time="2023-02-24T01:02:22.045386353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 24 01:02:22 multinode-858631 dockerd[971]: time="2023-02-24T01:02:22.045692276Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/56e8dbc10ef7074cba39c2add7f494709d957afb08b98cbeae360fce25491229 pid=6258 runtime=io.containerd.runc.v2
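
The Docker section above is a slice of the guest VM's systemd journal for the docker unit (note the "-- Journal begins at ..." header); minikube collects it over SSH. A rough local equivalent, as a hypothetical wrapper rather than minikube's collector:

	// dockerjournal_sketch.go - dump the docker unit's journal the way the
	// "==> Docker <==" section above was gathered (run inside the VM).
	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo", "journalctl", "-u", "docker", "--no-pager")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}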
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	56e8dbc10ef70       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   18d4897e34fa9
	891f0c08a14f7       5185b96f0becf                                                                                         2 minutes ago        Running             coredns                   0                   16cccc3389964
	789f0dcee13fe       6e38f40d628db                                                                                         2 minutes ago        Running             storage-provisioner       0                   69850c46e8835
	ebafedff9cf6c       kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe              2 minutes ago        Running             kindnet-cni               0                   34babfb41e50e
	d7b643bcdb886       46a6bb3c77ce0                                                                                         2 minutes ago        Running             kube-proxy                0                   33bc4ce6f85a9
	520690a3de270       fce326961ae2d                                                                                         2 minutes ago        Running             etcd                      0                   11e92749f86d2
	eaf66574e9cb6       655493523f607                                                                                         2 minutes ago        Running             kube-scheduler            0                   9606ff8ef5f99
	fe09023de51d1       deb04688c4a35                                                                                         2 minutes ago        Running             kube-apiserver            0                   43df63791385e
	1236d9f622921       e9c08e11b07f6                                                                                         3 minutes ago        Running             kube-controller-manager   0                   f72fb4f6682a7
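
The container status table above is effectively a docker ps against the node's runtime: one row per container with truncated image and pod sandbox IDs. A minimal Docker SDK sketch that lists the same containers (assuming the github.com/docker/docker client and a reachable daemon socket; illustrative, not how the report itself is produced):

	// containers_sketch.go - list containers, mirroring the
	// "==> container status <==" table above.
	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/api/types"
		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		containers, err := cli.ContainerList(context.Background(), types.ContainerListOptions{All: true})
		if err != nil {
			panic(err)
		}
		for _, c := range containers {
			// %.13s matches the 13-character IDs shown in the table above.
			fmt.Printf("%.13s  %-40.40s  %s\n", c.ID, c.Image, c.State)
		}
	}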
	
	* 
	* ==> coredns [891f0c08a14f] <==
	* [INFO] 10.244.1.2:57404 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000179108s
	[INFO] 10.244.0.3:56962 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204166s
	[INFO] 10.244.0.3:32914 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001956131s
	[INFO] 10.244.0.3:60276 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000177058s
	[INFO] 10.244.0.3:37837 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092369s
	[INFO] 10.244.0.3:50061 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001304911s
	[INFO] 10.244.0.3:37475 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000162047s
	[INFO] 10.244.0.3:45734 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103713s
	[INFO] 10.244.0.3:52349 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147778s
	[INFO] 10.244.1.2:57924 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190014s
	[INFO] 10.244.1.2:34536 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00016461s
	[INFO] 10.244.1.2:44915 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093146s
	[INFO] 10.244.1.2:43414 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000159306s
	[INFO] 10.244.0.3:41918 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096715s
	[INFO] 10.244.0.3:37576 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162934s
	[INFO] 10.244.0.3:45202 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000051871s
	[INFO] 10.244.0.3:35260 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085738s
	[INFO] 10.244.1.2:55209 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139804s
	[INFO] 10.244.1.2:35152 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000293214s
	[INFO] 10.244.1.2:37368 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000196166s
	[INFO] 10.244.1.2:38431 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000278777s
	[INFO] 10.244.0.3:42198 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151079s
	[INFO] 10.244.0.3:36646 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000063033s
	[INFO] 10.244.0.3:41803 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00008872s
	[INFO] 10.244.0.3:47687 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000076956s
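
The NXDOMAIN/NOERROR pattern above is the pod resolver walking its search path: with the cluster's ndots:5 resolv.conf, a short name like "kubernetes.default" is first expanded through each search suffix (the NXDOMAIN answers) until "kubernetes.default.svc.cluster.local" resolves with NOERROR. A stdlib sketch of the same lookup, to be run inside a pod so the cluster search domains apply:

	// dnswalk_sketch.go - resolve a short service name the way the
	// coredns queries above show a pod doing it.
	package main

	import (
		"context"
		"fmt"
		"net"
	)

	func main() {
		// Fewer than five dots, so the resolver tries each search suffix
		// first - the NXDOMAIN lines above - before the full
		// kubernetes.default.svc.cluster.local name answers NOERROR.
		ips, err := net.DefaultResolver.LookupHost(context.Background(), "kubernetes.default")
		if err != nil {
			panic(err)
		}
		for _, ip := range ips {
			fmt.Println(ip)
		}
	}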
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-858631
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-858631
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c13299ce0b45f38f7f45d3bc31124c3ea59c0510
	                    minikube.k8s.io/name=multinode-858631
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_24T01_01_00_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 24 Feb 2023 01:00:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-858631
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 24 Feb 2023 01:03:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 24 Feb 2023 01:02:31 +0000   Fri, 24 Feb 2023 01:00:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 24 Feb 2023 01:02:31 +0000   Fri, 24 Feb 2023 01:00:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 24 Feb 2023 01:02:31 +0000   Fri, 24 Feb 2023 01:00:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 24 Feb 2023 01:02:31 +0000   Fri, 24 Feb 2023 01:01:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    multinode-858631
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 8133424a1c7c4d738ea9a601f818107b
	  System UUID:                8133424a-1c7c-4d73-8ea9-a601f818107b
	  Boot ID:                    705d2688-47a8-48c7-bcc6-909f1595be50
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-pmnbg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-787d4945fb-xhwx9                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m39s
	  kube-system                 etcd-multinode-858631                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m51s
	  kube-system                 kindnet-cdxbx                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m40s
	  kube-system                 kube-apiserver-multinode-858631             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 kube-controller-manager-multinode-858631    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 kube-proxy-vlrn6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 kube-scheduler-multinode-858631             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m38s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m3s (x5 over 3m3s)  kubelet          Node multinode-858631 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m3s (x5 over 3m3s)  kubelet          Node multinode-858631 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m3s (x5 over 3m3s)  kubelet          Node multinode-858631 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m52s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m52s                kubelet          Node multinode-858631 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m52s                kubelet          Node multinode-858631 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m52s                kubelet          Node multinode-858631 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m52s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m40s                node-controller  Node multinode-858631 event: Registered Node multinode-858631 in Controller
	  Normal  NodeReady                2m28s                kubelet          Node multinode-858631 status is now: NodeReady
	
	
	Name:               multinode-858631-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-858631-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 24 Feb 2023 01:02:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-858631-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 24 Feb 2023 01:03:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 24 Feb 2023 01:02:34 +0000   Fri, 24 Feb 2023 01:02:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 24 Feb 2023 01:02:34 +0000   Fri, 24 Feb 2023 01:02:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 24 Feb 2023 01:02:34 +0000   Fri, 24 Feb 2023 01:02:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 24 Feb 2023 01:02:34 +0000   Fri, 24 Feb 2023 01:02:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.3
	  Hostname:    multinode-858631-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 c880124de1874ee0a51f88b5a4f1ece3
	  System UUID:                c880124d-e187-4ee0-a51f-88b5a4f1ece3
	  Boot ID:                    45d06bec-2c61-42ea-bc55-ed9d3f47ea39
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-bkl2m    0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kindnet-hhfkf               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      108s
	  kube-system                 kube-proxy-wrvgw            0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 104s                 kube-proxy       
	  Normal  Starting                 109s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  108s (x2 over 108s)  kubelet          Node multinode-858631-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s (x2 over 108s)  kubelet          Node multinode-858631-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s (x2 over 108s)  kubelet          Node multinode-858631-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  108s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           106s                 node-controller  Node multinode-858631-m02 event: Registered Node multinode-858631-m02 in Controller
	  Normal  NodeReady                95s                  kubelet          Node multinode-858631-m02 status is now: NodeReady
	
	
	Name:               multinode-858631-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-858631-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 24 Feb 2023 01:03:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-858631-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 24 Feb 2023 01:03:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 24 Feb 2023 01:03:18 +0000   Fri, 24 Feb 2023 01:03:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 24 Feb 2023 01:03:18 +0000   Fri, 24 Feb 2023 01:03:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 24 Feb 2023 01:03:18 +0000   Fri, 24 Feb 2023 01:03:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 24 Feb 2023 01:03:18 +0000   Fri, 24 Feb 2023 01:03:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.240
	  Hostname:    multinode-858631-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 5c2f15898560409c9b865744cb819409
	  System UUID:                5c2f1589-8560-409c-9b86-5744cb819409
	  Boot ID:                    d01f101c-bcd6-4283-bfb8-f8dddf12cd04
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-c942r       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      48s
	  kube-system                 kube-proxy-9rnd6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 44s                kube-proxy       
	  Normal  Starting                 48s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  48s (x2 over 48s)  kubelet          Node multinode-858631-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    48s (x2 over 48s)  kubelet          Node multinode-858631-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     48s (x2 over 48s)  kubelet          Node multinode-858631-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  48s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           46s                node-controller  Node multinode-858631-m03 event: Registered Node multinode-858631-m03 in Controller
	  Normal  NodeReady                34s                kubelet          Node multinode-858631-m03 status is now: NodeReady
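
These three node descriptions carry the same fields the earlier node_conditions.go lines summarized: 2 CPUs and 17784752Ki of ephemeral storage per node, with all pressure conditions False. A minimal client-go sketch that reads those fields directly (illustrative only, standard kubeconfig assumed):

	// nodecheck_sketch.go - print each node's capacity and pressure
	// conditions, matching the "==> describe nodes <==" output above.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Printf("%s cpu=%s ephemeral-storage=%s\n",
				n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
			for _, c := range n.Status.Conditions {
				switch c.Type {
				case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
					fmt.Printf("  %s=%s\n", c.Type, c.Status)
				}
			}
		}
	}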
	
	* 
	* ==> dmesg <==
	* [  +0.070223] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +3.956879] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.182341] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.147123] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.038236] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.466179] systemd-fstab-generator[546]: Ignoring "noauto" for root device
	[  +0.103218] systemd-fstab-generator[557]: Ignoring "noauto" for root device
	[  +5.331047] systemd-fstab-generator[735]: Ignoring "noauto" for root device
	[  +3.220071] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.331990] systemd-fstab-generator[898]: Ignoring "noauto" for root device
	[  +0.243658] systemd-fstab-generator[932]: Ignoring "noauto" for root device
	[  +0.109723] systemd-fstab-generator[943]: Ignoring "noauto" for root device
	[  +0.110176] systemd-fstab-generator[956]: Ignoring "noauto" for root device
	[  +1.445023] systemd-fstab-generator[1105]: Ignoring "noauto" for root device
	[  +0.105479] systemd-fstab-generator[1116]: Ignoring "noauto" for root device
	[  +0.099493] systemd-fstab-generator[1127]: Ignoring "noauto" for root device
	[  +0.109382] systemd-fstab-generator[1138]: Ignoring "noauto" for root device
	[  +4.843925] systemd-fstab-generator[1387]: Ignoring "noauto" for root device
	[  +0.532969] kauditd_printk_skb: 68 callbacks suppressed
	[ +11.250819] systemd-fstab-generator[2141]: Ignoring "noauto" for root device
	[Feb24 01:01] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.570692] kauditd_printk_skb: 12 callbacks suppressed
	
	* 
	* ==> etcd [520690a3de27] <==
	* {"level":"warn","ts":"2023-02-24T01:01:58.699Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-24T01:01:58.310Z","time spent":"389.612126ms","remote":"127.0.0.1:43764","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.39.217\" mod_revision:435 > success:<request_put:<key:\"/registry/masterleases/192.168.39.217\" value_size:67 lease:8213869183733117084 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.217\" > >"}
	{"level":"warn","ts":"2023-02-24T01:01:58.700Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"357.645808ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2023-02-24T01:01:58.700Z","caller":"traceutil/trace.go:171","msg":"trace[1246393292] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:443; }","duration":"357.819676ms","start":"2023-02-24T01:01:58.342Z","end":"2023-02-24T01:01:58.700Z","steps":["trace[1246393292] 'agreement among raft nodes before linearized reading'  (duration: 357.422478ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-24T01:01:58.700Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-24T01:01:58.342Z","time spent":"357.901917ms","remote":"127.0.0.1:43786","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1140,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"info","ts":"2023-02-24T01:01:58.700Z","caller":"traceutil/trace.go:171","msg":"trace[609908551] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:0; response_revision:443; }","duration":"238.555602ms","start":"2023-02-24T01:01:58.461Z","end":"2023-02-24T01:01:58.700Z","steps":["trace[609908551] 'agreement among raft nodes before linearized reading'  (duration: 238.221795ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-24T01:01:59.245Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"412.806902ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17437241220587892898 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:441 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-02-24T01:01:59.245Z","caller":"traceutil/trace.go:171","msg":"trace[1863625836] linearizableReadLoop","detail":"{readStateIndex:468; appliedIndex:467; }","duration":"540.070693ms","start":"2023-02-24T01:01:58.705Z","end":"2023-02-24T01:01:59.245Z","steps":["trace[1863625836] 'read index received'  (duration: 127.125993ms)","trace[1863625836] 'applied index is now lower than readState.Index'  (duration: 412.944139ms)"],"step_count":2}
	{"level":"info","ts":"2023-02-24T01:01:59.246Z","caller":"traceutil/trace.go:171","msg":"trace[617464482] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"540.459827ms","start":"2023-02-24T01:01:58.705Z","end":"2023-02-24T01:01:59.246Z","steps":["trace[617464482] 'process raft request'  (duration: 127.325664ms)","trace[617464482] 'compare'  (duration: 412.606398ms)"],"step_count":2}
	{"level":"warn","ts":"2023-02-24T01:01:59.246Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-24T01:01:58.705Z","time spent":"540.605719ms","remote":"127.0.0.1:43786","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1101,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:441 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2023-02-24T01:01:59.246Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"508.205128ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-02-24T01:01:59.246Z","caller":"traceutil/trace.go:171","msg":"trace[1904230364] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; response_count:0; response_revision:444; }","duration":"508.25585ms","start":"2023-02-24T01:01:58.738Z","end":"2023-02-24T01:01:59.246Z","steps":["trace[1904230364] 'agreement among raft nodes before linearized reading'  (duration: 508.176727ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-24T01:01:59.246Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-24T01:01:58.738Z","time spent":"508.289738ms","remote":"127.0.0.1:43774","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":1,"response size":31,"request content":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" count_only:true "}
	{"level":"warn","ts":"2023-02-24T01:01:59.246Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"383.744677ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-02-24T01:01:59.246Z","caller":"traceutil/trace.go:171","msg":"trace[498940617] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:444; }","duration":"383.762281ms","start":"2023-02-24T01:01:58.862Z","end":"2023-02-24T01:01:59.246Z","steps":["trace[498940617] 'agreement among raft nodes before linearized reading'  (duration: 383.730465ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-24T01:01:59.246Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-24T01:01:58.862Z","time spent":"383.797863ms","remote":"127.0.0.1:43846","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2023-02-24T01:01:59.246Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"540.837991ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2023-02-24T01:01:59.246Z","caller":"traceutil/trace.go:171","msg":"trace[477372589] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:1; response_revision:444; }","duration":"540.855928ms","start":"2023-02-24T01:01:58.705Z","end":"2023-02-24T01:01:59.246Z","steps":["trace[477372589] 'agreement among raft nodes before linearized reading'  (duration: 540.785906ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-24T01:01:59.246Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-24T01:01:58.705Z","time spent":"540.888721ms","remote":"127.0.0.1:43764","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":1,"response size":159,"request content":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" "}
	{"level":"info","ts":"2023-02-24T01:02:57.662Z","caller":"traceutil/trace.go:171","msg":"trace[1926505598] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"106.410423ms","start":"2023-02-24T01:02:57.556Z","end":"2023-02-24T01:02:57.662Z","steps":["trace[1926505598] 'process raft request'  (duration: 106.240224ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-24T01:02:58.262Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"137.76133ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17437241220587893391 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.217\" mod_revision:571 > success:<request_put:<key:\"/registry/masterleases/192.168.39.217\" value_size:67 lease:8213869183733117581 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.217\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-02-24T01:02:58.262Z","caller":"traceutil/trace.go:171","msg":"trace[900094601] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"272.052506ms","start":"2023-02-24T01:02:57.990Z","end":"2023-02-24T01:02:58.262Z","steps":["trace[900094601] 'process raft request'  (duration: 133.948282ms)","trace[900094601] 'compare'  (duration: 137.467001ms)"],"step_count":2}
	{"level":"warn","ts":"2023-02-24T01:02:58.682Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"127.377973ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-02-24T01:02:58.683Z","caller":"traceutil/trace.go:171","msg":"trace[1870110251] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; response_count:0; response_revision:580; }","duration":"127.517896ms","start":"2023-02-24T01:02:58.555Z","end":"2023-02-24T01:02:58.682Z","steps":["trace[1870110251] 'count revisions from in-memory index tree'  (duration: 127.125493ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-24T01:02:59.960Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"133.041703ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-02-24T01:02:59.960Z","caller":"traceutil/trace.go:171","msg":"trace[1707009120] range","detail":"{range_begin:/registry/configmaps/; range_end:/registry/configmaps0; response_count:0; response_revision:582; }","duration":"133.100368ms","start":"2023-02-24T01:02:59.826Z","end":"2023-02-24T01:02:59.960Z","steps":["trace[1707009120] 'count revisions from in-memory index tree'  (duration: 132.8787ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  01:03:52 up 3 min,  0 users,  load average: 0.79, 0.34, 0.13
	Linux multinode-858631 5.10.57 #1 SMP Thu Feb 16 22:09:52 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [ebafedff9cf6] <==
	* I0224 01:03:09.999545       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.240 Flags: [] Table: 0} 
	I0224 01:03:20.015146       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0224 01:03:20.015263       1 main.go:227] handling current node
	I0224 01:03:20.015279       1 main.go:223] Handling node with IPs: map[192.168.39.3:{}]
	I0224 01:03:20.015285       1 main.go:250] Node multinode-858631-m02 has CIDR [10.244.1.0/24] 
	I0224 01:03:20.016402       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0224 01:03:20.016517       1 main.go:250] Node multinode-858631-m03 has CIDR [10.244.2.0/24] 
	I0224 01:03:30.022477       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0224 01:03:30.022503       1 main.go:227] handling current node
	I0224 01:03:30.022518       1 main.go:223] Handling node with IPs: map[192.168.39.3:{}]
	I0224 01:03:30.022523       1 main.go:250] Node multinode-858631-m02 has CIDR [10.244.1.0/24] 
	I0224 01:03:30.022700       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0224 01:03:30.022707       1 main.go:250] Node multinode-858631-m03 has CIDR [10.244.2.0/24] 
	I0224 01:03:40.036081       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0224 01:03:40.036101       1 main.go:227] handling current node
	I0224 01:03:40.036124       1 main.go:223] Handling node with IPs: map[192.168.39.3:{}]
	I0224 01:03:40.036131       1 main.go:250] Node multinode-858631-m02 has CIDR [10.244.1.0/24] 
	I0224 01:03:40.036365       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0224 01:03:40.036374       1 main.go:250] Node multinode-858631-m03 has CIDR [10.244.2.0/24] 
	I0224 01:03:50.046065       1 main.go:223] Handling node with IPs: map[192.168.39.217:{}]
	I0224 01:03:50.046135       1 main.go:227] handling current node
	I0224 01:03:50.046159       1 main.go:223] Handling node with IPs: map[192.168.39.3:{}]
	I0224 01:03:50.046169       1 main.go:250] Node multinode-858631-m02 has CIDR [10.244.1.0/24] 
	I0224 01:03:50.046417       1 main.go:223] Handling node with IPs: map[192.168.39.240:{}]
	I0224 01:03:50.046459       1 main.go:250] Node multinode-858631-m03 has CIDR [10.244.2.0/24] 
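
The kindnet loop above reconciles every ten seconds: for each remote node it installs a route sending that node's PodCIDR via its InternalIP (e.g. 10.244.2.0/24 via 192.168.39.240 for m03). A minimal netlink sketch of one such route install (assuming the github.com/vishvananda/netlink package; Linux, root required):

	// kindnetroute_sketch.go - install one pod-CIDR route of the shape the
	// "Adding route" line above describes.
	package main

	import (
		"net"

		"github.com/vishvananda/netlink"
	)

	func main() {
		_, dst, err := net.ParseCIDR("10.244.2.0/24")
		if err != nil {
			panic(err)
		}
		// RouteReplace is idempotent, so re-running it on every sync pass,
		// as the reconcile loop above does, is safe.
		route := &netlink.Route{Dst: dst, Gw: net.ParseIP("192.168.39.240")}
		if err := netlink.RouteReplace(route); err != nil {
			panic(err)
		}
	}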
	
	* 
	* ==> kube-apiserver [fe09023de51d] <==
	* I0224 01:00:57.118105       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0224 01:00:57.122640       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0224 01:00:57.122653       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0224 01:00:57.697510       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0224 01:00:57.737565       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0224 01:00:57.884803       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0224 01:00:57.894325       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.39.217]
	I0224 01:00:57.895479       1 controller.go:615] quota admission added evaluator for: endpoints
	I0224 01:00:57.900008       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0224 01:00:58.173957       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0224 01:00:59.585983       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0224 01:00:59.602264       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0224 01:00:59.617070       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0224 01:01:11.580462       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0224 01:01:11.879154       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0224 01:01:58.701378       1 trace.go:219] Trace[368084194]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.39.217,type:*v1.Endpoints,resource:apiServerIPInfo (24-Feb-2023 01:01:58.181) (total time: 519ms):
	Trace[368084194]: ---"Transaction prepared" 127ms (01:01:58.309)
	Trace[368084194]: ---"Txn call completed" 391ms (01:01:58.701)
	Trace[368084194]: [519.939338ms] [519.939338ms] END
	I0224 01:01:59.247137       1 trace.go:219] Trace[990533136]: "Update" accept:application/json, */*,audit-id:7b546f7a-c601-41e1-b10e-1e85ad5601b2,client:192.168.39.217,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (24-Feb-2023 01:01:58.703) (total time: 543ms):
	Trace[990533136]: ["GuaranteedUpdate etcd3" audit-id:7b546f7a-c601-41e1-b10e-1e85ad5601b2,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 542ms (01:01:58.704)
	Trace[990533136]:  ---"Txn call completed" 541ms (01:01:59.246)]
	Trace[990533136]: [543.349416ms] [543.349416ms] END
	I0224 01:01:59.248065       1 trace.go:219] Trace[97825507]: "List(recursive=true) etcd3" audit-id:,key:/masterleases/,resourceVersion:0,resourceVersionMatch:NotOlderThan,limit:0,continue: (24-Feb-2023 01:01:58.704) (total time: 543ms):
	Trace[97825507]: [543.458244ms] [543.458244ms] END
	
	* 
	* ==> kube-controller-manager [1236d9f62292] <==
	* I0224 01:01:11.908557       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-cdxbx"
	I0224 01:01:12.043584       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-pjnpp"
	I0224 01:01:12.054853       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-xhwx9"
	I0224 01:01:12.264558       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-787d4945fb to 1 from 2"
	I0224 01:01:12.300154       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-787d4945fb-pjnpp"
	I0224 01:01:26.048745       1 node_lifecycle_controller.go:1231] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	W0224 01:02:04.183578       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-858631-m02" does not exist
	I0224 01:02:04.211788       1 range_allocator.go:372] Set node multinode-858631-m02 PodCIDR to [10.244.1.0/24]
	I0224 01:02:04.219106       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wrvgw"
	I0224 01:02:04.219332       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hhfkf"
	W0224 01:02:06.054580       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-858631-m02. Assuming now as a timestamp.
	I0224 01:02:06.054696       1 event.go:294] "Event occurred" object="multinode-858631-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-858631-m02 event: Registered Node multinode-858631-m02 in Controller"
	W0224 01:02:17.579624       1 topologycache.go:232] Can't get CPU or zone information for multinode-858631-m02 node
	I0224 01:02:19.907868       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
	I0224 01:02:19.927765       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-bkl2m"
	I0224 01:02:19.945949       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-pmnbg"
	I0224 01:02:21.071176       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48-bkl2m" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-6b86dd6d48-bkl2m"
	W0224 01:03:04.815016       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-858631-m03" does not exist
	W0224 01:03:04.816913       1 topologycache.go:232] Can't get CPU or zone information for multinode-858631-m02 node
	I0224 01:03:04.833299       1 range_allocator.go:372] Set node multinode-858631-m03 PodCIDR to [10.244.2.0/24]
	I0224 01:03:04.843519       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-c942r"
	I0224 01:03:04.843573       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9rnd6"
	W0224 01:03:06.079261       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-858631-m03. Assuming now as a timestamp.
	I0224 01:03:06.079626       1 event.go:294] "Event occurred" object="multinode-858631-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-858631-m03 event: Registered Node multinode-858631-m03 in Controller"
	W0224 01:03:18.075812       1 topologycache.go:232] Can't get CPU or zone information for multinode-858631-m02 node
	
	* 
	* ==> kube-proxy [d7b643bcdb88] <==
	* I0224 01:01:13.121900       1 node.go:163] Successfully retrieved node IP: 192.168.39.217
	I0224 01:01:13.125308       1 server_others.go:109] "Detected node IP" address="192.168.39.217"
	I0224 01:01:13.125365       1 server_others.go:535] "Using iptables proxy"
	I0224 01:01:13.255848       1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0224 01:01:13.255890       1 server_others.go:176] "Using iptables Proxier"
	I0224 01:01:13.255924       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0224 01:01:13.256178       1 server.go:655] "Version info" version="v1.26.1"
	I0224 01:01:13.256281       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 01:01:13.259007       1 config.go:317] "Starting service config controller"
	I0224 01:01:13.259018       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0224 01:01:13.259042       1 config.go:226] "Starting endpoint slice config controller"
	I0224 01:01:13.259045       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0224 01:01:13.263080       1 config.go:444] "Starting node config controller"
	I0224 01:01:13.263163       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0224 01:01:13.359315       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0224 01:01:13.359358       1 shared_informer.go:280] Caches are synced for service config
	I0224 01:01:13.364105       1 shared_informer.go:280] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [eaf66574e9cb] <==
	* E0224 01:00:56.249644       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0224 01:00:56.249652       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0224 01:00:56.250444       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0224 01:00:56.250457       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0224 01:00:56.250465       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0224 01:00:56.250471       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0224 01:00:56.250477       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0224 01:00:56.250484       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0224 01:00:56.253696       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0224 01:00:56.257289       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0224 01:00:57.085845       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0224 01:00:57.085901       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0224 01:00:57.096856       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0224 01:00:57.096917       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0224 01:00:57.102355       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0224 01:00:57.102374       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0224 01:00:57.130083       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0224 01:00:57.130343       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0224 01:00:57.323364       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0224 01:00:57.323679       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0224 01:00:57.326108       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0224 01:00:57.326151       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0224 01:00:57.360737       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0224 01:00:57.360785       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0224 01:01:00.327536       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Fri 2023-02-24 01:00:19 UTC, ends at Fri 2023-02-24 01:03:52 UTC. --
	Feb 24 01:01:11 multinode-858631 kubelet[2154]: I0224 01:01:11.955809    2154 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skzdl\" (UniqueName: \"kubernetes.io/projected/55b36f8b-ffbe-49b3-99fc-aea074319cd0-kube-api-access-skzdl\") pod \"kindnet-cdxbx\" (UID: \"55b36f8b-ffbe-49b3-99fc-aea074319cd0\") " pod="kube-system/kindnet-cdxbx"
	Feb 24 01:01:11 multinode-858631 kubelet[2154]: I0224 01:01:11.955831    2154 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55b36f8b-ffbe-49b3-99fc-aea074319cd0-lib-modules\") pod \"kindnet-cdxbx\" (UID: \"55b36f8b-ffbe-49b3-99fc-aea074319cd0\") " pod="kube-system/kindnet-cdxbx"
	Feb 24 01:01:11 multinode-858631 kubelet[2154]: I0224 01:01:11.955852    2154 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed1ab279-4267-4c3c-a68d-a729dc29f05b-xtables-lock\") pod \"kube-proxy-vlrn6\" (UID: \"ed1ab279-4267-4c3c-a68d-a729dc29f05b\") " pod="kube-system/kube-proxy-vlrn6"
	Feb 24 01:01:11 multinode-858631 kubelet[2154]: I0224 01:01:11.955873    2154 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fqb2\" (UniqueName: \"kubernetes.io/projected/ed1ab279-4267-4c3c-a68d-a729dc29f05b-kube-api-access-9fqb2\") pod \"kube-proxy-vlrn6\" (UID: \"ed1ab279-4267-4c3c-a68d-a729dc29f05b\") " pod="kube-system/kube-proxy-vlrn6"
	Feb 24 01:01:11 multinode-858631 kubelet[2154]: I0224 01:01:11.955893    2154 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ed1ab279-4267-4c3c-a68d-a729dc29f05b-kube-proxy\") pod \"kube-proxy-vlrn6\" (UID: \"ed1ab279-4267-4c3c-a68d-a729dc29f05b\") " pod="kube-system/kube-proxy-vlrn6"
	Feb 24 01:01:11 multinode-858631 kubelet[2154]: I0224 01:01:11.955929    2154 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed1ab279-4267-4c3c-a68d-a729dc29f05b-lib-modules\") pod \"kube-proxy-vlrn6\" (UID: \"ed1ab279-4267-4c3c-a68d-a729dc29f05b\") " pod="kube-system/kube-proxy-vlrn6"
	Feb 24 01:01:13 multinode-858631 kubelet[2154]: I0224 01:01:13.245619    2154 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-vlrn6" podStartSLOduration=2.24558033 pod.CreationTimestamp="2023-02-24 01:01:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 01:01:13.244406408 +0000 UTC m=+13.689776843" watchObservedRunningTime="2023-02-24 01:01:13.24558033 +0000 UTC m=+13.690950758"
	Feb 24 01:01:15 multinode-858631 kubelet[2154]: I0224 01:01:15.906680    2154 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34babfb41e50eb77c4bc905313e993b544b738a460a7a8d0fa527acf39b209b2"
	Feb 24 01:01:23 multinode-858631 kubelet[2154]: I0224 01:01:23.079980    2154 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Feb 24 01:01:23 multinode-858631 kubelet[2154]: I0224 01:01:23.126467    2154 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-cdxbx" podStartSLOduration=-9.223372024728348e+09 pod.CreationTimestamp="2023-02-24 01:01:11 +0000 UTC" firstStartedPulling="2023-02-24 01:01:15.909493931 +0000 UTC m=+16.354864344" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 01:01:19.964031269 +0000 UTC m=+20.409401686" watchObservedRunningTime="2023-02-24 01:01:23.126427267 +0000 UTC m=+23.571797700"
	Feb 24 01:01:23 multinode-858631 kubelet[2154]: I0224 01:01:23.126708    2154 topology_manager.go:210] "Topology Admit Handler"
	Feb 24 01:01:23 multinode-858631 kubelet[2154]: I0224 01:01:23.129041    2154 topology_manager.go:210] "Topology Admit Handler"
	Feb 24 01:01:23 multinode-858631 kubelet[2154]: W0224 01:01:23.138561    2154 reflector.go:424] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:multinode-858631" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'multinode-858631' and this object
	Feb 24 01:01:23 multinode-858631 kubelet[2154]: E0224 01:01:23.138666    2154 reflector.go:140] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:multinode-858631" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'multinode-858631' and this object
	Feb 24 01:01:23 multinode-858631 kubelet[2154]: I0224 01:01:23.140608    2154 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkgqx\" (UniqueName: \"kubernetes.io/projected/7ec578fe-05c4-4916-8db9-67ee112c136f-kube-api-access-tkgqx\") pod \"storage-provisioner\" (UID: \"7ec578fe-05c4-4916-8db9-67ee112c136f\") " pod="kube-system/storage-provisioner"
	Feb 24 01:01:23 multinode-858631 kubelet[2154]: I0224 01:01:23.140721    2154 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d799d4f-0d4b-468e-85ad-052c1735e35c-config-volume\") pod \"coredns-787d4945fb-xhwx9\" (UID: \"9d799d4f-0d4b-468e-85ad-052c1735e35c\") " pod="kube-system/coredns-787d4945fb-xhwx9"
	Feb 24 01:01:23 multinode-858631 kubelet[2154]: I0224 01:01:23.140796    2154 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crxjh\" (UniqueName: \"kubernetes.io/projected/9d799d4f-0d4b-468e-85ad-052c1735e35c-kube-api-access-crxjh\") pod \"coredns-787d4945fb-xhwx9\" (UID: \"9d799d4f-0d4b-468e-85ad-052c1735e35c\") " pod="kube-system/coredns-787d4945fb-xhwx9"
	Feb 24 01:01:23 multinode-858631 kubelet[2154]: I0224 01:01:23.140869    2154 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7ec578fe-05c4-4916-8db9-67ee112c136f-tmp\") pod \"storage-provisioner\" (UID: \"7ec578fe-05c4-4916-8db9-67ee112c136f\") " pod="kube-system/storage-provisioner"
	Feb 24 01:01:24 multinode-858631 kubelet[2154]: E0224 01:01:24.243849    2154 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Feb 24 01:01:24 multinode-858631 kubelet[2154]: E0224 01:01:24.243996    2154 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9d799d4f-0d4b-468e-85ad-052c1735e35c-config-volume podName:9d799d4f-0d4b-468e-85ad-052c1735e35c nodeName:}" failed. No retries permitted until 2023-02-24 01:01:24.743960629 +0000 UTC m=+25.189331043 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9d799d4f-0d4b-468e-85ad-052c1735e35c-config-volume") pod "coredns-787d4945fb-xhwx9" (UID: "9d799d4f-0d4b-468e-85ad-052c1735e35c") : failed to sync configmap cache: timed out waiting for the condition
	Feb 24 01:01:25 multinode-858631 kubelet[2154]: I0224 01:01:25.012411    2154 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.012379467 pod.CreationTimestamp="2023-02-24 01:01:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 01:01:25.012115165 +0000 UTC m=+25.457485598" watchObservedRunningTime="2023-02-24 01:01:25.012379467 +0000 UTC m=+25.457749899"
	Feb 24 01:01:25 multinode-858631 kubelet[2154]: I0224 01:01:25.496245    2154 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16cccc3389964c69e4d80432441960c0308efe13b6105bc10ffbb73ab2cb03ef"
	Feb 24 01:01:26 multinode-858631 kubelet[2154]: I0224 01:01:26.531245    2154 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-xhwx9" podStartSLOduration=14.531056449 pod.CreationTimestamp="2023-02-24 01:01:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 01:01:26.526178496 +0000 UTC m=+26.971548929" watchObservedRunningTime="2023-02-24 01:01:26.531056449 +0000 UTC m=+26.976426902"
	Feb 24 01:02:19 multinode-858631 kubelet[2154]: I0224 01:02:19.978403    2154 topology_manager.go:210] "Topology Admit Handler"
	Feb 24 01:02:20 multinode-858631 kubelet[2154]: I0224 01:02:20.018651    2154 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gc6s\" (UniqueName: \"kubernetes.io/projected/f4b83e91-f308-405b-be73-10f422c3af35-kube-api-access-5gc6s\") pod \"busybox-6b86dd6d48-pmnbg\" (UID: \"f4b83e91-f308-405b-be73-10f422c3af35\") " pod="default/busybox-6b86dd6d48-pmnbg"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-858631 -n multinode-858631
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-858631 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StartAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StartAfterStop (20.80s)
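(Editor's note: the failure above is simply the harness treating any non-zero exit from `node start` as fatal. A minimal standalone sketch of that check, assuming a hypothetical runner; the command and profile name are taken from the log above, everything else is an assumption and not the actual multinode_test.go source:)

	// Hypothetical reproduction of the exit-status check behind
	// multinode_test.go:252. Command and profile come from the report;
	// the structure is an assumption, not the suite's real helpers.
	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64",
			"-p", "multinode-858631", "node", "start", "m03", "--alsologtostderr")
		out, err := cmd.CombinedOutput()
		if err != nil {
			// A non-zero exit (exit status 90 in this report) surfaces here
			// as an *exec.ExitError; the harness fails the test on it.
			log.Fatalf("node start failed: %v\n%s", err, out)
		}
	}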

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (77.64s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-966618 --alsologtostderr -v=1 --driver=kvm2 
E0224 01:23:53.748526   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/ingress-addon-legacy-443815/client.crt: no such file or directory
E0224 01:23:54.341568   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/skaffold-406990/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-966618 --alsologtostderr -v=1 --driver=kvm2 : (1m14.13126065s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-966618] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15909-4074/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-4074/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node pause-966618 in cluster pause-966618
	* Updating the running kvm2 "pause-966618" VM ...
	* Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-966618" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
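(Editor's note: the assertion at pause_test.go:100 above is a plain substring check over the captured start output. A minimal sketch of that kind of check, with hypothetical names; the real test uses the suite's own run-result helpers:)

	// Hypothetical sketch of the log-substring assertion behind
	// pause_test.go:100; `stdout` stands in for the captured start
	// output shown above, and the function name is an assumption.
	package main

	import (
		"fmt"
		"strings"
	)

	func checkSecondStart(stdout string) error {
		const want = "The running cluster does not require reconfiguration"
		if !strings.Contains(stdout, want) {
			return fmt.Errorf("expected the second start log output to include %q", want)
		}
		return nil
	}

	func main() {
		fmt.Println(checkSecondStart("* Updating the running kvm2 \"pause-966618\" VM ..."))
	}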
** stderr ** 
	I0224 01:23:51.906885   33267 out.go:296] Setting OutFile to fd 1 ...
	I0224 01:23:51.907044   33267 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 01:23:51.907056   33267 out.go:309] Setting ErrFile to fd 2...
	I0224 01:23:51.907062   33267 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 01:23:51.907188   33267 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-4074/.minikube/bin
	I0224 01:23:51.907761   33267 out.go:303] Setting JSON to false
	I0224 01:23:51.908666   33267 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":3981,"bootTime":1677197851,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 01:23:51.908722   33267 start.go:135] virtualization: kvm guest
	I0224 01:23:51.911208   33267 out.go:177] * [pause-966618] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0224 01:23:51.912611   33267 out.go:177]   - MINIKUBE_LOCATION=15909
	I0224 01:23:51.912615   33267 notify.go:220] Checking for updates...
	I0224 01:23:51.914094   33267 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 01:23:51.915997   33267 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15909-4074/kubeconfig
	I0224 01:23:51.917803   33267 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-4074/.minikube
	I0224 01:23:51.919129   33267 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0224 01:23:51.920426   33267 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 01:23:51.922334   33267 config.go:182] Loaded profile config "pause-966618": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 01:23:51.922862   33267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:23:51.922921   33267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:23:51.938690   33267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45073
	I0224 01:23:51.939076   33267 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:23:51.939623   33267 main.go:141] libmachine: Using API Version  1
	I0224 01:23:51.939644   33267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:23:51.939959   33267 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:23:51.940142   33267 main.go:141] libmachine: (pause-966618) Calling .DriverName
	I0224 01:23:51.940314   33267 driver.go:365] Setting default libvirt URI to qemu:///system
	I0224 01:23:51.940633   33267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:23:51.940666   33267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:23:51.956038   33267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36541
	I0224 01:23:51.956472   33267 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:23:51.957014   33267 main.go:141] libmachine: Using API Version  1
	I0224 01:23:51.957036   33267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:23:51.957413   33267 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:23:51.957614   33267 main.go:141] libmachine: (pause-966618) Calling .DriverName
	I0224 01:23:51.996618   33267 out.go:177] * Using the kvm2 driver based on existing profile
	I0224 01:23:51.997819   33267 start.go:296] selected driver: kvm2
	I0224 01:23:51.997834   33267 start.go:857] validating driver "kvm2" against &{Name:pause-966618 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:pause-966618 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.59 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 01:23:51.997984   33267 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 01:23:51.998325   33267 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 01:23:51.998409   33267 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15909-4074/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0224 01:23:52.013827   33267 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.29.0
	I0224 01:23:52.014720   33267 cni.go:84] Creating CNI manager for ""
	I0224 01:23:52.014749   33267 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0224 01:23:52.014762   33267 start_flags.go:319] config:
	{Name:pause-966618 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:pause-966618 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.59 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 01:23:52.014941   33267 iso.go:125] acquiring lock: {Name:mkc3d6185dc03bdb5dc9fb9cd39dd085e0eef640 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 01:23:52.016764   33267 out.go:177] * Starting control plane node pause-966618 in cluster pause-966618
	I0224 01:23:52.018011   33267 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 01:23:52.018044   33267 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0224 01:23:52.018057   33267 cache.go:57] Caching tarball of preloaded images
	I0224 01:23:52.018146   33267 preload.go:174] Found /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0224 01:23:52.018161   33267 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0224 01:23:52.018328   33267 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/pause-966618/config.json ...
	I0224 01:23:52.018538   33267 cache.go:193] Successfully downloaded all kic artifacts
	I0224 01:23:52.018562   33267 start.go:364] acquiring machines lock for pause-966618: {Name:mk99c679472abf655c2223ea7db4ce727d2ab6ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0224 01:23:52.018614   33267 start.go:368] acquired machines lock for "pause-966618" in 33.594µs
	I0224 01:23:52.018630   33267 start.go:96] Skipping create...Using existing machine configuration
	I0224 01:23:52.018638   33267 fix.go:55] fixHost starting: 
	I0224 01:23:52.019025   33267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:23:52.019069   33267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:23:52.032573   33267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33061
	I0224 01:23:52.033033   33267 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:23:52.033593   33267 main.go:141] libmachine: Using API Version  1
	I0224 01:23:52.033617   33267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:23:52.033919   33267 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:23:52.034113   33267 main.go:141] libmachine: (pause-966618) Calling .DriverName
	I0224 01:23:52.034260   33267 main.go:141] libmachine: (pause-966618) Calling .GetState
	I0224 01:23:52.035967   33267 fix.go:103] recreateIfNeeded on pause-966618: state=Running err=<nil>
	W0224 01:23:52.035983   33267 fix.go:129] unexpected machine state, will restart: <nil>
	I0224 01:23:52.038320   33267 out.go:177] * Updating the running kvm2 "pause-966618" VM ...
	I0224 01:23:52.039665   33267 machine.go:88] provisioning docker machine ...
	I0224 01:23:52.039687   33267 main.go:141] libmachine: (pause-966618) Calling .DriverName
	I0224 01:23:52.039864   33267 main.go:141] libmachine: (pause-966618) Calling .GetMachineName
	I0224 01:23:52.040022   33267 buildroot.go:166] provisioning hostname "pause-966618"
	I0224 01:23:52.040046   33267 main.go:141] libmachine: (pause-966618) Calling .GetMachineName
	I0224 01:23:52.040171   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHHostname
	I0224 01:23:52.042626   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:23:52.043076   33267 main.go:141] libmachine: (pause-966618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:4c:50", ip: ""} in network mk-pause-966618: {Iface:virbr2 ExpiryTime:2023-02-24 02:22:50 +0000 UTC Type:0 Mac:52:54:00:6b:4c:50 Iaid: IPaddr:192.168.50.59 Prefix:24 Hostname:pause-966618 Clientid:01:52:54:00:6b:4c:50}
	I0224 01:23:52.043101   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined IP address 192.168.50.59 and MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:23:52.043301   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHPort
	I0224 01:23:52.043460   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHKeyPath
	I0224 01:23:52.043604   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHKeyPath
	I0224 01:23:52.043746   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHUsername
	I0224 01:23:52.043909   33267 main.go:141] libmachine: Using SSH client type: native
	I0224 01:23:52.044315   33267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.50.59 22 <nil> <nil>}
	I0224 01:23:52.044334   33267 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-966618 && echo "pause-966618" | sudo tee /etc/hostname
	I0224 01:23:52.183611   33267 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-966618
	
	I0224 01:23:52.183646   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHHostname
	I0224 01:23:52.186508   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:23:52.186847   33267 main.go:141] libmachine: (pause-966618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:4c:50", ip: ""} in network mk-pause-966618: {Iface:virbr2 ExpiryTime:2023-02-24 02:22:50 +0000 UTC Type:0 Mac:52:54:00:6b:4c:50 Iaid: IPaddr:192.168.50.59 Prefix:24 Hostname:pause-966618 Clientid:01:52:54:00:6b:4c:50}
	I0224 01:23:52.186878   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined IP address 192.168.50.59 and MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:23:52.187022   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHPort
	I0224 01:23:52.187219   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHKeyPath
	I0224 01:23:52.187410   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHKeyPath
	I0224 01:23:52.187541   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHUsername
	I0224 01:23:52.187720   33267 main.go:141] libmachine: Using SSH client type: native
	I0224 01:23:52.188399   33267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.50.59 22 <nil> <nil>}
	I0224 01:23:52.188437   33267 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-966618' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-966618/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-966618' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 01:23:52.322341   33267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 01:23:52.322375   33267 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15909-4074/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-4074/.minikube}
	I0224 01:23:52.322396   33267 buildroot.go:174] setting up certificates
	I0224 01:23:52.322406   33267 provision.go:83] configureAuth start
	I0224 01:23:52.322419   33267 main.go:141] libmachine: (pause-966618) Calling .GetMachineName
	I0224 01:23:52.322680   33267 main.go:141] libmachine: (pause-966618) Calling .GetIP
	I0224 01:23:52.325462   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:23:52.325842   33267 main.go:141] libmachine: (pause-966618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:4c:50", ip: ""} in network mk-pause-966618: {Iface:virbr2 ExpiryTime:2023-02-24 02:22:50 +0000 UTC Type:0 Mac:52:54:00:6b:4c:50 Iaid: IPaddr:192.168.50.59 Prefix:24 Hostname:pause-966618 Clientid:01:52:54:00:6b:4c:50}
	I0224 01:23:52.325882   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined IP address 192.168.50.59 and MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:23:52.326045   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHHostname
	I0224 01:23:52.328252   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:23:52.328609   33267 main.go:141] libmachine: (pause-966618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:4c:50", ip: ""} in network mk-pause-966618: {Iface:virbr2 ExpiryTime:2023-02-24 02:22:50 +0000 UTC Type:0 Mac:52:54:00:6b:4c:50 Iaid: IPaddr:192.168.50.59 Prefix:24 Hostname:pause-966618 Clientid:01:52:54:00:6b:4c:50}
	I0224 01:23:52.328638   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined IP address 192.168.50.59 and MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:23:52.328784   33267 provision.go:138] copyHostCerts
	I0224 01:23:52.328849   33267 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem, removing ...
	I0224 01:23:52.328861   33267 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem
	I0224 01:23:52.328928   33267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem (1078 bytes)
	I0224 01:23:52.329043   33267 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem, removing ...
	I0224 01:23:52.329054   33267 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem
	I0224 01:23:52.329083   33267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem (1123 bytes)
	I0224 01:23:52.329162   33267 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem, removing ...
	I0224 01:23:52.329173   33267 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem
	I0224 01:23:52.329198   33267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem (1679 bytes)
	I0224 01:23:52.329265   33267 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca-key.pem org=jenkins.pause-966618 san=[192.168.50.59 192.168.50.59 localhost 127.0.0.1 minikube pause-966618]
	I0224 01:23:52.425398   33267 provision.go:172] copyRemoteCerts
	I0224 01:23:52.425460   33267 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 01:23:52.425507   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHHostname
	I0224 01:23:52.428381   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:23:52.428726   33267 main.go:141] libmachine: (pause-966618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:4c:50", ip: ""} in network mk-pause-966618: {Iface:virbr2 ExpiryTime:2023-02-24 02:22:50 +0000 UTC Type:0 Mac:52:54:00:6b:4c:50 Iaid: IPaddr:192.168.50.59 Prefix:24 Hostname:pause-966618 Clientid:01:52:54:00:6b:4c:50}
	I0224 01:23:52.428761   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined IP address 192.168.50.59 and MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:23:52.428986   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHPort
	I0224 01:23:52.429175   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHKeyPath
	I0224 01:23:52.429357   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHUsername
	I0224 01:23:52.429533   33267 sshutil.go:53] new ssh client: &{IP:192.168.50.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/pause-966618/id_rsa Username:docker}
	I0224 01:23:52.537254   33267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0224 01:23:52.573481   33267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0224 01:23:52.600181   33267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0224 01:23:52.627734   33267 provision.go:86] duration metric: configureAuth took 305.314452ms
	I0224 01:23:52.627765   33267 buildroot.go:189] setting minikube options for container-runtime
	I0224 01:23:52.627954   33267 config.go:182] Loaded profile config "pause-966618": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 01:23:52.627977   33267 main.go:141] libmachine: (pause-966618) Calling .DriverName
	I0224 01:23:52.628239   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHHostname
	I0224 01:23:52.630741   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:23:52.631202   33267 main.go:141] libmachine: (pause-966618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:4c:50", ip: ""} in network mk-pause-966618: {Iface:virbr2 ExpiryTime:2023-02-24 02:22:50 +0000 UTC Type:0 Mac:52:54:00:6b:4c:50 Iaid: IPaddr:192.168.50.59 Prefix:24 Hostname:pause-966618 Clientid:01:52:54:00:6b:4c:50}
	I0224 01:23:52.631233   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined IP address 192.168.50.59 and MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:23:52.631362   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHPort
	I0224 01:23:52.631537   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHKeyPath
	I0224 01:23:52.631689   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHKeyPath
	I0224 01:23:52.631840   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHUsername
	I0224 01:23:52.632035   33267 main.go:141] libmachine: Using SSH client type: native
	I0224 01:23:52.632604   33267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.50.59 22 <nil> <nil>}
	I0224 01:23:52.632622   33267 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0224 01:23:52.759584   33267 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0224 01:23:52.759614   33267 buildroot.go:70] root file system type: tmpfs
	I0224 01:23:52.759743   33267 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0224 01:23:52.759769   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHHostname
	I0224 01:23:52.762398   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:23:52.762698   33267 main.go:141] libmachine: (pause-966618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:4c:50", ip: ""} in network mk-pause-966618: {Iface:virbr2 ExpiryTime:2023-02-24 02:22:50 +0000 UTC Type:0 Mac:52:54:00:6b:4c:50 Iaid: IPaddr:192.168.50.59 Prefix:24 Hostname:pause-966618 Clientid:01:52:54:00:6b:4c:50}
	I0224 01:23:52.762723   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined IP address 192.168.50.59 and MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:23:52.762913   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHPort
	I0224 01:23:52.763112   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHKeyPath
	I0224 01:23:52.763244   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHKeyPath
	I0224 01:23:52.763338   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHUsername
	I0224 01:23:52.763509   33267 main.go:141] libmachine: Using SSH client type: native
	I0224 01:23:52.764029   33267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.50.59 22 <nil> <nil>}
	I0224 01:23:52.764089   33267 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0224 01:23:52.908382   33267 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0224 01:23:52.908426   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHHostname
	I0224 01:23:52.911156   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:23:52.911579   33267 main.go:141] libmachine: (pause-966618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:4c:50", ip: ""} in network mk-pause-966618: {Iface:virbr2 ExpiryTime:2023-02-24 02:22:50 +0000 UTC Type:0 Mac:52:54:00:6b:4c:50 Iaid: IPaddr:192.168.50.59 Prefix:24 Hostname:pause-966618 Clientid:01:52:54:00:6b:4c:50}
	I0224 01:23:52.911614   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined IP address 192.168.50.59 and MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:23:52.911772   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHPort
	I0224 01:23:52.911961   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHKeyPath
	I0224 01:23:52.912128   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHKeyPath
	I0224 01:23:52.912276   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHUsername
	I0224 01:23:52.912412   33267 main.go:141] libmachine: Using SSH client type: native
	I0224 01:23:52.912853   33267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.50.59 22 <nil> <nil>}
	I0224 01:23:52.912878   33267 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0224 01:23:53.049105   33267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 01:23:53.049130   33267 machine.go:91] provisioned docker machine in 1.009454336s
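
The unit swap above is an idempotent write-if-changed: diff exits non-zero only when the freshly rendered unit differs from the installed one, so the mv / daemon-reload / restart branch runs only on change (here there was no diff output, hence the fast provision). A sketch of how such a command string could be assembled; the helper name is hypothetical, the path is the one from the log:

// Sketch: build the "write-if-changed" shell command seen in the log.
package main

import "fmt"

func updateUnitCmd(path string) string {
	// diff exits non-zero when the files differ, so the || branch installs
	// the new unit and restarts docker only when something changed.
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && "+
			"sudo systemctl -f restart docker; }", path)
}

func main() {
	fmt.Println(updateUnitCmd("/lib/systemd/system/docker.service"))
}
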
	I0224 01:23:53.049143   33267 start.go:300] post-start starting for "pause-966618" (driver="kvm2")
	I0224 01:23:53.049151   33267 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 01:23:53.049183   33267 main.go:141] libmachine: (pause-966618) Calling .DriverName
	I0224 01:23:53.049538   33267 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 01:23:53.049571   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHHostname
	I0224 01:23:53.052483   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:23:53.052855   33267 main.go:141] libmachine: (pause-966618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:4c:50", ip: ""} in network mk-pause-966618: {Iface:virbr2 ExpiryTime:2023-02-24 02:22:50 +0000 UTC Type:0 Mac:52:54:00:6b:4c:50 Iaid: IPaddr:192.168.50.59 Prefix:24 Hostname:pause-966618 Clientid:01:52:54:00:6b:4c:50}
	I0224 01:23:53.052891   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined IP address 192.168.50.59 and MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:23:53.053092   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHPort
	I0224 01:23:53.053299   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHKeyPath
	I0224 01:23:53.053506   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHUsername
	I0224 01:23:53.053667   33267 sshutil.go:53] new ssh client: &{IP:192.168.50.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/pause-966618/id_rsa Username:docker}
	I0224 01:23:53.151054   33267 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 01:23:53.155848   33267 info.go:137] Remote host: Buildroot 2021.02.12
	I0224 01:23:53.155867   33267 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-4074/.minikube/addons for local assets ...
	I0224 01:23:53.155935   33267 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-4074/.minikube/files for local assets ...
	I0224 01:23:53.156032   33267 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem -> 111312.pem in /etc/ssl/certs
	I0224 01:23:53.156113   33267 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 01:23:53.164785   33267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem --> /etc/ssl/certs/111312.pem (1708 bytes)
	I0224 01:23:53.189251   33267 start.go:303] post-start completed in 140.09549ms
	I0224 01:23:53.189269   33267 fix.go:57] fixHost completed within 1.170631207s
	I0224 01:23:53.189289   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHHostname
	I0224 01:23:53.192094   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:23:53.192421   33267 main.go:141] libmachine: (pause-966618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:4c:50", ip: ""} in network mk-pause-966618: {Iface:virbr2 ExpiryTime:2023-02-24 02:22:50 +0000 UTC Type:0 Mac:52:54:00:6b:4c:50 Iaid: IPaddr:192.168.50.59 Prefix:24 Hostname:pause-966618 Clientid:01:52:54:00:6b:4c:50}
	I0224 01:23:53.192445   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined IP address 192.168.50.59 and MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:23:53.192674   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHPort
	I0224 01:23:53.192844   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHKeyPath
	I0224 01:23:53.193013   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHKeyPath
	I0224 01:23:53.193165   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHUsername
	I0224 01:23:53.193324   33267 main.go:141] libmachine: Using SSH client type: native
	I0224 01:23:53.193789   33267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.50.59 22 <nil> <nil>}
	I0224 01:23:53.193808   33267 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0224 01:23:53.319052   33267 main.go:141] libmachine: SSH cmd err, output: <nil>: 1677201833.313628461
	
	I0224 01:23:53.319076   33267 fix.go:207] guest clock: 1677201833.313628461
	I0224 01:23:53.319085   33267 fix.go:220] Guest: 2023-02-24 01:23:53.313628461 +0000 UTC Remote: 2023-02-24 01:23:53.189273062 +0000 UTC m=+1.337800230 (delta=124.355399ms)
	I0224 01:23:53.319106   33267 fix.go:191] guest clock delta is within tolerance: 124.355399ms
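
The guest-clock check above runs "date +%s.%N" inside the VM and compares it with the host's wall clock; the ~124ms delta is accepted. A simplified Go sketch of that comparison, executed locally; the 2-second tolerance is an assumption, the log only shows that ~124ms passes:

// Sketch: parse a fractional epoch timestamp and compute the clock delta.
package main

import (
	"fmt"
	"math"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		panic(err)
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	fmt.Printf("guest clock delta: %v\n", delta)
	if math.Abs(delta.Seconds()) < 2 { // assumed tolerance
		fmt.Println("within tolerance")
	}
}
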
	I0224 01:23:53.319113   33267 start.go:83] releasing machines lock for "pause-966618", held for 1.300488248s
	I0224 01:23:53.319135   33267 main.go:141] libmachine: (pause-966618) Calling .DriverName
	I0224 01:23:53.319380   33267 main.go:141] libmachine: (pause-966618) Calling .GetIP
	I0224 01:23:53.321992   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:23:53.322302   33267 main.go:141] libmachine: (pause-966618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:4c:50", ip: ""} in network mk-pause-966618: {Iface:virbr2 ExpiryTime:2023-02-24 02:22:50 +0000 UTC Type:0 Mac:52:54:00:6b:4c:50 Iaid: IPaddr:192.168.50.59 Prefix:24 Hostname:pause-966618 Clientid:01:52:54:00:6b:4c:50}
	I0224 01:23:53.322333   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined IP address 192.168.50.59 and MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:23:53.322437   33267 main.go:141] libmachine: (pause-966618) Calling .DriverName
	I0224 01:23:53.322919   33267 main.go:141] libmachine: (pause-966618) Calling .DriverName
	I0224 01:23:53.323083   33267 main.go:141] libmachine: (pause-966618) Calling .DriverName
	I0224 01:23:53.323165   33267 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0224 01:23:53.323218   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHHostname
	I0224 01:23:53.323266   33267 ssh_runner.go:195] Run: cat /version.json
	I0224 01:23:53.323291   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHHostname
	I0224 01:23:53.325973   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:23:53.326221   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:23:53.326294   33267 main.go:141] libmachine: (pause-966618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:4c:50", ip: ""} in network mk-pause-966618: {Iface:virbr2 ExpiryTime:2023-02-24 02:22:50 +0000 UTC Type:0 Mac:52:54:00:6b:4c:50 Iaid: IPaddr:192.168.50.59 Prefix:24 Hostname:pause-966618 Clientid:01:52:54:00:6b:4c:50}
	I0224 01:23:53.326318   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined IP address 192.168.50.59 and MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:23:53.326504   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHPort
	I0224 01:23:53.326671   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHKeyPath
	I0224 01:23:53.326771   33267 main.go:141] libmachine: (pause-966618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:4c:50", ip: ""} in network mk-pause-966618: {Iface:virbr2 ExpiryTime:2023-02-24 02:22:50 +0000 UTC Type:0 Mac:52:54:00:6b:4c:50 Iaid: IPaddr:192.168.50.59 Prefix:24 Hostname:pause-966618 Clientid:01:52:54:00:6b:4c:50}
	I0224 01:23:53.326804   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHUsername
	I0224 01:23:53.326830   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined IP address 192.168.50.59 and MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:23:53.326955   33267 sshutil.go:53] new ssh client: &{IP:192.168.50.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/pause-966618/id_rsa Username:docker}
	I0224 01:23:53.327037   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHPort
	I0224 01:23:53.327201   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHKeyPath
	I0224 01:23:53.327350   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHUsername
	I0224 01:23:53.327459   33267 sshutil.go:53] new ssh client: &{IP:192.168.50.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/pause-966618/id_rsa Username:docker}
	I0224 01:23:53.414587   33267 ssh_runner.go:195] Run: systemctl --version
	I0224 01:23:53.439988   33267 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0224 01:23:53.445845   33267 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0224 01:23:53.445913   33267 ssh_runner.go:195] Run: which cri-dockerd
	I0224 01:23:53.450187   33267 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0224 01:23:53.460879   33267 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0224 01:23:53.484962   33267 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0224 01:23:53.495551   33267 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
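
The find/-exec mv step above renames any bridge or podman CNI configs to *.mk_disabled so they stop being loaded; in this run nothing matched. Roughly the same walk in Go, with the directory and patterns taken from the logged command (would need root on the guest):

// Sketch: disable bridge/podman CNI configs by renaming them.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	matches, _ := filepath.Glob("/etc/cni/net.d/*")
	for _, f := range matches {
		base := filepath.Base(f)
		if strings.HasSuffix(base, ".mk_disabled") {
			continue // already disabled
		}
		if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
			fmt.Println("disabling", f)
			if err := os.Rename(f, f+".mk_disabled"); err != nil {
				fmt.Println("rename failed:", err)
			}
		}
	}
}
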
	I0224 01:23:53.495576   33267 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 01:23:53.495682   33267 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 01:23:53.530384   33267 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0224 01:23:53.530405   33267 docker.go:560] Images already preloaded, skipping extraction
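
The preload check above lists the daemon's images and compares them against the expected set for v1.26.1; since everything is present, tarball extraction is skipped. A reduced Go sketch of that decision (required list abbreviated from the log's output; needs a running Docker daemon):

// Sketch: "are the expected images already in the daemon?"
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[img] = true
	}
	// Abbreviated from the list printed above.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.26.1",
		"registry.k8s.io/etcd:3.5.6-0",
		"registry.k8s.io/coredns/coredns:v1.9.3",
	}
	for _, img := range required {
		if !have[img] {
			fmt.Println("missing, would extract preload tarball:", img)
			return
		}
	}
	fmt.Println("images already preloaded, skipping extraction")
}
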
	I0224 01:23:53.530417   33267 start.go:485] detecting cgroup driver to use...
	I0224 01:23:53.530540   33267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 01:23:53.552110   33267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0224 01:23:53.563856   33267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0224 01:23:53.574089   33267 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0224 01:23:53.574147   33267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0224 01:23:53.587249   33267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 01:23:53.600648   33267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0224 01:23:53.611543   33267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 01:23:53.623133   33267 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 01:23:53.637330   33267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0224 01:23:53.650079   33267 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 01:23:53.660039   33267 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 01:23:53.671074   33267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 01:23:53.847984   33267 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0224 01:23:53.869210   33267 start.go:485] detecting cgroup driver to use...
	I0224 01:23:53.869298   33267 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0224 01:23:53.893160   33267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0224 01:23:53.909905   33267 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0224 01:23:53.929405   33267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0224 01:23:53.942365   33267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0224 01:23:53.954556   33267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 01:23:53.972686   33267 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0224 01:23:54.126661   33267 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0224 01:23:54.290083   33267 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0224 01:23:54.290118   33267 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0224 01:23:54.308244   33267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 01:23:54.467650   33267 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0224 01:24:01.925857   33267 ssh_runner.go:235] Completed: sudo systemctl restart docker: (7.458167896s)
	I0224 01:24:01.925922   33267 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 01:24:02.056477   33267 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0224 01:24:02.215069   33267 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 01:24:02.389053   33267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 01:24:02.546883   33267 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0224 01:24:02.574062   33267 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0224 01:24:02.574132   33267 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0224 01:24:02.585496   33267 start.go:553] Will wait 60s for crictl version
	I0224 01:24:02.585568   33267 ssh_runner.go:195] Run: which crictl
	I0224 01:24:02.590153   33267 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0224 01:24:03.072360   33267 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0224 01:24:03.072412   33267 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 01:24:03.244422   33267 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 01:24:03.342231   33267 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0224 01:24:03.342267   33267 main.go:141] libmachine: (pause-966618) Calling .GetIP
	I0224 01:24:03.344797   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:24:03.345280   33267 main.go:141] libmachine: (pause-966618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:4c:50", ip: ""} in network mk-pause-966618: {Iface:virbr2 ExpiryTime:2023-02-24 02:22:50 +0000 UTC Type:0 Mac:52:54:00:6b:4c:50 Iaid: IPaddr:192.168.50.59 Prefix:24 Hostname:pause-966618 Clientid:01:52:54:00:6b:4c:50}
	I0224 01:24:03.345308   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined IP address 192.168.50.59 and MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:24:03.345517   33267 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0224 01:24:03.351758   33267 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 01:24:03.351829   33267 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 01:24:03.407474   33267 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0224 01:24:03.407503   33267 docker.go:560] Images already preloaded, skipping extraction
	I0224 01:24:03.407564   33267 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 01:24:03.482618   33267 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0224 01:24:03.482643   33267 cache_images.go:84] Images are preloaded, skipping loading
	I0224 01:24:03.482720   33267 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0224 01:24:03.552552   33267 cni.go:84] Creating CNI manager for ""
	I0224 01:24:03.552591   33267 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0224 01:24:03.552610   33267 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0224 01:24:03.552633   33267 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.59 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-966618 NodeName:pause-966618 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.59"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.59 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0224 01:24:03.552823   33267 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.59
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "pause-966618"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.59
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.59"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0224 01:24:03.552995   33267 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=pause-966618 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:pause-966618 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0224 01:24:03.553068   33267 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0224 01:24:03.564717   33267 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 01:24:03.564849   33267 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0224 01:24:03.578279   33267 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (445 bytes)
	I0224 01:24:03.612562   33267 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 01:24:03.635705   33267 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2091 bytes)
	I0224 01:24:03.664616   33267 ssh_runner.go:195] Run: grep 192.168.50.59	control-plane.minikube.internal$ /etc/hosts
	I0224 01:24:03.668704   33267 certs.go:56] Setting up /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/pause-966618 for IP: 192.168.50.59
	I0224 01:24:03.668733   33267 certs.go:186] acquiring lock for shared ca certs: {Name:mk0c9037d1d3974a6bc5ba375ef4804966dba284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 01:24:03.668901   33267 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.key
	I0224 01:24:03.668968   33267 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15909-4074/.minikube/proxy-client-ca.key
	I0224 01:24:03.669056   33267 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/pause-966618/client.key
	I0224 01:24:03.669132   33267 certs.go:311] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/pause-966618/apiserver.key.503789f7
	I0224 01:24:03.669177   33267 certs.go:311] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/pause-966618/proxy-client.key
	I0224 01:24:03.669309   33267 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/11131.pem (1338 bytes)
	W0224 01:24:03.669351   33267 certs.go:397] ignoring /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/11131_empty.pem, impossibly tiny 0 bytes
	I0224 01:24:03.669361   33267 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 01:24:03.669399   33267 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem (1078 bytes)
	I0224 01:24:03.669427   33267 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem (1123 bytes)
	I0224 01:24:03.669453   33267 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem (1679 bytes)
	I0224 01:24:03.669529   33267 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem (1708 bytes)
	I0224 01:24:03.670241   33267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/pause-966618/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0224 01:24:03.713538   33267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/pause-966618/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0224 01:24:03.755417   33267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/pause-966618/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0224 01:24:03.798581   33267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/pause-966618/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0224 01:24:03.849588   33267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 01:24:03.894542   33267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0224 01:24:03.959626   33267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 01:24:04.030257   33267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0224 01:24:04.086156   33267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem --> /usr/share/ca-certificates/111312.pem (1708 bytes)
	I0224 01:24:04.160983   33267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 01:24:04.222457   33267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/certs/11131.pem --> /usr/share/ca-certificates/11131.pem (1338 bytes)
	I0224 01:24:04.300585   33267 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0224 01:24:04.351963   33267 ssh_runner.go:195] Run: openssl version
	I0224 01:24:04.360210   33267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11131.pem && ln -fs /usr/share/ca-certificates/11131.pem /etc/ssl/certs/11131.pem"
	I0224 01:24:04.373198   33267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11131.pem
	I0224 01:24:04.380069   33267 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 00:46 /usr/share/ca-certificates/11131.pem
	I0224 01:24:04.380128   33267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11131.pem
	I0224 01:24:04.387260   33267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11131.pem /etc/ssl/certs/51391683.0"
	I0224 01:24:04.397327   33267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111312.pem && ln -fs /usr/share/ca-certificates/111312.pem /etc/ssl/certs/111312.pem"
	I0224 01:24:04.412670   33267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111312.pem
	I0224 01:24:04.418660   33267 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 00:46 /usr/share/ca-certificates/111312.pem
	I0224 01:24:04.418721   33267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111312.pem
	I0224 01:24:04.428952   33267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111312.pem /etc/ssl/certs/3ec20f2e.0"
	I0224 01:24:04.445057   33267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 01:24:04.467093   33267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 01:24:04.477130   33267 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0224 01:24:04.477193   33267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 01:24:04.486908   33267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
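
The b5213941.0 / 3ec20f2e.0 / 51391683.0 names above are OpenSSL subject hashes: each CA is symlinked under /etc/ssl/certs/<hash>.0 so TLS libraries can find it by hash lookup. A small Go sketch showing how such a symlink name is derived; the path is from the log, and the sketch prints the ln command instead of running it:

// Sketch: derive the /etc/ssl/certs/<hash>.0 symlink name for a CA cert.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", pem, hash) // b5213941.0 in this run
}
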
	I0224 01:24:04.504572   33267 kubeadm.go:401] StartCluster: {Name:pause-966618 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:pause-966618 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.59 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 01:24:04.504714   33267 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0224 01:24:04.567285   33267 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0224 01:24:04.577774   33267 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0224 01:24:04.577801   33267 kubeadm.go:633] restartCluster start
	I0224 01:24:04.577854   33267 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0224 01:24:04.591642   33267 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0224 01:24:04.592683   33267 kubeconfig.go:92] found "pause-966618" server: "https://192.168.50.59:8443"
	I0224 01:24:04.594350   33267 kapi.go:59] client config for pause-966618: &rest.Config{Host:"https://192.168.50.59:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/profiles/pause-966618/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/profiles/pause-966618/client.key", CAFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 01:24:04.595483   33267 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0224 01:24:04.607356   33267 api_server.go:165] Checking apiserver status ...
	I0224 01:24:04.607409   33267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 01:24:04.621640   33267 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 01:24:05.122354   33267 api_server.go:165] Checking apiserver status ...
	I0224 01:24:05.122435   33267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 01:24:05.143612   33267 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5237/cgroup
	I0224 01:24:05.168131   33267 api_server.go:181] apiserver freezer: "4:freezer:/kubepods/burstable/pod3c4cb414ea174d52a7771cd825a60da4/f3b13c7b26554647427d96da4778b97778089ad3de9a095b4a4ac6e94420710a"
	I0224 01:24:05.168203   33267 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod3c4cb414ea174d52a7771cd825a60da4/f3b13c7b26554647427d96da4778b97778089ad3de9a095b4a4ac6e94420710a/freezer.state
	I0224 01:24:05.203581   33267 api_server.go:203] freezer state: "THAWED"
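
To decide whether the apiserver is merely frozen or actually unhealthy, the check above maps the pgrep'd pid to its freezer cgroup via /proc/<pid>/cgroup and reads freezer.state. A sketch of that lookup; a cgroup v1 layout is assumed, and the pid is the one from the log, used here purely for illustration:

// Sketch: resolve a pid's freezer cgroup and read its state.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func freezerPath(pid int) (string, error) {
	f, err := os.Open(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", err
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		// Lines look like "4:freezer:/kubepods/burstable/pod.../<container-id>".
		parts := strings.SplitN(s.Text(), ":", 3)
		if len(parts) == 3 && parts[1] == "freezer" {
			return "/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state", nil
		}
	}
	return "", fmt.Errorf("no freezer controller for pid %d", pid)
}

func main() {
	p, err := freezerPath(5237) // pid taken from the log
	if err != nil {
		fmt.Println(err)
		return
	}
	state, err := os.ReadFile(p)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("freezer state:", strings.TrimSpace(string(state))) // THAWED or FROZEN
}
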
	I0224 01:24:05.203612   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:05.204074   33267 api_server.go:268] stopped: https://192.168.50.59:8443/healthz: Get "https://192.168.50.59:8443/healthz": dial tcp 192.168.50.59:8443: connect: connection refused
	I0224 01:24:05.204142   33267 retry.go:31] will retry after 267.81454ms: state is "Stopped"
	I0224 01:24:05.472589   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:10.473787   33267 api_server.go:268] stopped: https://192.168.50.59:8443/healthz: Get "https://192.168.50.59:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0224 01:24:10.473840   33267 retry.go:31] will retry after 243.862331ms: state is "Stopped"
	I0224 01:24:10.718360   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:15.719342   33267 api_server.go:268] stopped: https://192.168.50.59:8443/healthz: Get "https://192.168.50.59:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0224 01:24:15.719394   33267 retry.go:31] will retry after 422.470511ms: state is "Stopped"
	I0224 01:24:16.142605   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:21.145567   33267 api_server.go:268] stopped: https://192.168.50.59:8443/healthz: Get "https://192.168.50.59:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0224 01:24:21.145609   33267 api_server.go:165] Checking apiserver status ...
	I0224 01:24:21.145654   33267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 01:24:21.157873   33267 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5237/cgroup
	I0224 01:24:21.165931   33267 api_server.go:181] apiserver freezer: "4:freezer:/kubepods/burstable/pod3c4cb414ea174d52a7771cd825a60da4/f3b13c7b26554647427d96da4778b97778089ad3de9a095b4a4ac6e94420710a"
	I0224 01:24:21.165990   33267 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod3c4cb414ea174d52a7771cd825a60da4/f3b13c7b26554647427d96da4778b97778089ad3de9a095b4a4ac6e94420710a/freezer.state
	I0224 01:24:21.174063   33267 api_server.go:203] freezer state: "THAWED"
	I0224 01:24:21.174086   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:25.956026   33267 api_server.go:268] stopped: https://192.168.50.59:8443/healthz: Get "https://192.168.50.59:8443/healthz": read tcp 192.168.50.1:39626->192.168.50.59:8443: read: connection reset by peer
	I0224 01:24:25.956076   33267 retry.go:31] will retry after 311.602757ms: state is "Stopped"
	I0224 01:24:26.268418   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:26.269104   33267 api_server.go:268] stopped: https://192.168.50.59:8443/healthz: Get "https://192.168.50.59:8443/healthz": dial tcp 192.168.50.59:8443: connect: connection refused
	I0224 01:24:26.269148   33267 retry.go:31] will retry after 290.90337ms: state is "Stopped"
	I0224 01:24:26.560641   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:26.561312   33267 api_server.go:268] stopped: https://192.168.50.59:8443/healthz: Get "https://192.168.50.59:8443/healthz": dial tcp 192.168.50.59:8443: connect: connection refused
	I0224 01:24:26.561360   33267 retry.go:31] will retry after 342.910461ms: state is "Stopped"
	I0224 01:24:26.904879   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:26.905458   33267 api_server.go:268] stopped: https://192.168.50.59:8443/healthz: Get "https://192.168.50.59:8443/healthz": dial tcp 192.168.50.59:8443: connect: connection refused
	I0224 01:24:26.905522   33267 retry.go:31] will retry after 415.045448ms: state is "Stopped"
	I0224 01:24:27.321435   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:27.322031   33267 api_server.go:268] stopped: https://192.168.50.59:8443/healthz: Get "https://192.168.50.59:8443/healthz": dial tcp 192.168.50.59:8443: connect: connection refused
	I0224 01:24:27.322066   33267 retry.go:31] will retry after 591.528268ms: state is "Stopped"
	I0224 01:24:27.913739   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:27.914326   33267 api_server.go:268] stopped: https://192.168.50.59:8443/healthz: Get "https://192.168.50.59:8443/healthz": dial tcp 192.168.50.59:8443: connect: connection refused
	I0224 01:24:27.914365   33267 retry.go:31] will retry after 586.050958ms: state is "Stopped"
	I0224 01:24:28.500566   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:28.501054   33267 api_server.go:268] stopped: https://192.168.50.59:8443/healthz: Get "https://192.168.50.59:8443/healthz": dial tcp 192.168.50.59:8443: connect: connection refused
	I0224 01:24:28.501085   33267 retry.go:31] will retry after 1.039845677s: state is "Stopped"
	I0224 01:24:29.541224   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:29.541923   33267 api_server.go:268] stopped: https://192.168.50.59:8443/healthz: Get "https://192.168.50.59:8443/healthz": dial tcp 192.168.50.59:8443: connect: connection refused
	I0224 01:24:29.541966   33267 retry.go:31] will retry after 946.928043ms: state is "Stopped"
	I0224 01:24:30.489037   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:30.489700   33267 api_server.go:268] stopped: https://192.168.50.59:8443/healthz: Get "https://192.168.50.59:8443/healthz": dial tcp 192.168.50.59:8443: connect: connection refused
	I0224 01:24:30.489741   33267 retry.go:31] will retry after 1.630537817s: state is "Stopped"
	I0224 01:24:32.120974   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:32.121579   33267 api_server.go:268] stopped: https://192.168.50.59:8443/healthz: Get "https://192.168.50.59:8443/healthz": dial tcp 192.168.50.59:8443: connect: connection refused
	I0224 01:24:32.121616   33267 retry.go:31] will retry after 1.661532498s: state is "Stopped"
	I0224 01:24:33.784375   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:33.785016   33267 api_server.go:268] stopped: https://192.168.50.59:8443/healthz: Get "https://192.168.50.59:8443/healthz": dial tcp 192.168.50.59:8443: connect: connection refused
	I0224 01:24:33.785071   33267 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
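
The long run of "Checking apiserver healthz ... will retry after ..." lines above is a polling loop with jittered backoff against /healthz; after repeated connection refusals and timeouts it gives up and falls through to a full reconfigure. A plain net/http sketch of such a probe loop; the fixed 500ms interval and 60s deadline are assumptions, minikube's actual backoff is the jittered retry.go sequence shown above:

// Sketch: poll an apiserver /healthz endpoint until it answers 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the per-request timeouts visible in the log
		// The real client authenticates with the minikube client cert and CA;
		// this sketch skips verification for the health probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.50.59:8443/healthz")
		if err == nil {
			resp.Body.Close()
			fmt.Println("healthz status:", resp.StatusCode)
			if resp.StatusCode == http.StatusOK {
				return
			}
		} else {
			fmt.Println("retrying:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver; would fall through to reconfigure")
}
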
	I0224 01:24:33.785080   33267 kubeadm.go:1120] stopping kube-system containers ...
	I0224 01:24:33.785148   33267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0224 01:24:33.815898   33267 docker.go:456] Stopping containers: [4fbc20ae5bf7 ad958c9f692b e8da938203fc 426996151fbc 29450e6f7ba6 fac1cbcee419 4ce5be8f6eb9 f3b13c7b2655 de777fb1d6b3 eebafd5971ce dfb4d0df0765 d6f702db3bf6 4f657627f345 99c7798b80b0 228dc753c532 512bb7879a7d 280d511aab5e c0eb86ebedf9 6e49e0ea3699 063bb6925a9d 56e99b7105e1 37f4903be34d]
	I0224 01:24:33.815973   33267 ssh_runner.go:195] Run: docker stop 4fbc20ae5bf7 ad958c9f692b e8da938203fc 426996151fbc 29450e6f7ba6 fac1cbcee419 4ce5be8f6eb9 f3b13c7b2655 de777fb1d6b3 eebafd5971ce dfb4d0df0765 d6f702db3bf6 4f657627f345 99c7798b80b0 228dc753c532 512bb7879a7d 280d511aab5e c0eb86ebedf9 6e49e0ea3699 063bb6925a9d 56e99b7105e1 37f4903be34d
	I0224 01:24:39.082069   33267 ssh_runner.go:235] Completed: docker stop 4fbc20ae5bf7 ad958c9f692b e8da938203fc 426996151fbc 29450e6f7ba6 fac1cbcee419 4ce5be8f6eb9 f3b13c7b2655 de777fb1d6b3 eebafd5971ce dfb4d0df0765 d6f702db3bf6 4f657627f345 99c7798b80b0 228dc753c532 512bb7879a7d 280d511aab5e c0eb86ebedf9 6e49e0ea3699 063bb6925a9d 56e99b7105e1 37f4903be34d: (5.266065396s)
	I0224 01:24:39.082132   33267 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0224 01:24:39.126516   33267 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 01:24:39.136579   33267 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Feb 24 01:23 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Feb 24 01:23 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Feb 24 01:23 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Feb 24 01:23 /etc/kubernetes/scheduler.conf
	
	I0224 01:24:39.136646   33267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0224 01:24:39.145075   33267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0224 01:24:39.153171   33267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0224 01:24:39.161688   33267 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0224 01:24:39.161744   33267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0224 01:24:39.169623   33267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0224 01:24:39.177326   33267 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0224 01:24:39.177391   33267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0224 01:24:39.185537   33267 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 01:24:39.194208   33267 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0224 01:24:39.194230   33267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 01:24:39.320624   33267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 01:24:40.517893   33267 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.197229724s)
	I0224 01:24:40.517927   33267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0224 01:24:40.735504   33267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 01:24:40.833361   33267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0224 01:24:40.950707   33267 api_server.go:51] waiting for apiserver process to appear ...
	I0224 01:24:40.950773   33267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 01:24:40.964974   33267 api_server.go:71] duration metric: took 14.2683ms to wait for apiserver process to appear ...
	I0224 01:24:40.965001   33267 api_server.go:87] waiting for apiserver healthz status ...
	I0224 01:24:40.965013   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:45.376650   33267 api_server.go:278] https://192.168.50.59:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0224 01:24:45.376682   33267 api_server.go:102] status: https://192.168.50.59:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0224 01:24:45.876934   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:45.882491   33267 api_server.go:278] https://192.168.50.59:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0224 01:24:45.882516   33267 api_server.go:102] status: https://192.168.50.59:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 01:24:46.376861   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:46.382503   33267 api_server.go:278] https://192.168.50.59:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0224 01:24:46.382529   33267 api_server.go:102] status: https://192.168.50.59:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 01:24:46.877091   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:46.882591   33267 api_server.go:278] https://192.168.50.59:8443/healthz returned 200:
	ok
	I0224 01:24:46.891863   33267 api_server.go:140] control plane version: v1.26.1
	I0224 01:24:46.891882   33267 api_server.go:130] duration metric: took 5.926875892s to wait for apiserver health ...
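
The 403, 500, 200 progression above is the normal startup order: anonymous requests are Forbidden until authorization is wired up, the 500s report [-]poststarthook/rbac/bootstrap-roles (and briefly the priority-class hook) as still pending, and the endpoint flips to 200 once every post-start hook has run. A sketch of such a poll loop, with certificate verification skipped purely for brevity (an assumption; the real checker trusts the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            // InsecureSkipVerify is for illustration only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        for {
            resp, err := client.Get("https://192.168.50.59:8443/healthz")
            if err == nil {
                ok := resp.StatusCode == http.StatusOK
                resp.Body.Close()
                if ok {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
        }
    }
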
	I0224 01:24:46.891891   33267 cni.go:84] Creating CNI manager for ""
	I0224 01:24:46.891900   33267 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0224 01:24:46.893673   33267 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0224 01:24:46.894663   33267 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0224 01:24:46.905495   33267 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
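
The 457-byte /etc/cni/net.d/1-k8s.conflist pushed above configures the bridge CNI announced on the previous line. The log does not show the file's contents, so the conflist below is only a representative bridge-plus-portmap chain with an assumed host-local subnet, not a copy of the generated file:

    package main

    import "os"

    // Illustrative contents; field values are assumptions, not the actual 457 bytes.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
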
	I0224 01:24:46.921751   33267 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 01:24:46.934126   33267 system_pods.go:59] 6 kube-system pods found
	I0224 01:24:46.934148   33267 system_pods.go:61] "coredns-787d4945fb-5kk6f" [864031ca-0190-46a6-9191-bed0ab15761f] Running
	I0224 01:24:46.934153   33267 system_pods.go:61] "etcd-pause-966618" [cd22a134-0381-429c-b9b1-9cc9c2130730] Running
	I0224 01:24:46.934157   33267 system_pods.go:61] "kube-apiserver-pause-966618" [152ed33c-514e-4289-a994-58e7d466b19d] Running
	I0224 01:24:46.934162   33267 system_pods.go:61] "kube-controller-manager-pause-966618" [e87845de-aa91-4c77-9ece-00268d888b81] Running
	I0224 01:24:46.934168   33267 system_pods.go:61] "kube-proxy-7wlbf" [98036b9d-4a03-4d42-9f71-28b8df888be5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0224 01:24:46.934172   33267 system_pods.go:61] "kube-scheduler-pause-966618" [c6381154-d98c-4778-886c-6390c12c324e] Running
	I0224 01:24:46.934177   33267 system_pods.go:74] duration metric: took 12.408948ms to wait for pod list to return data ...
	I0224 01:24:46.934186   33267 node_conditions.go:102] verifying NodePressure condition ...
	I0224 01:24:46.937710   33267 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0224 01:24:46.937732   33267 node_conditions.go:123] node cpu capacity is 2
	I0224 01:24:46.937740   33267 node_conditions.go:105] duration metric: took 3.549963ms to run NodePressure ...
	I0224 01:24:46.937755   33267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 01:24:47.235472   33267 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0224 01:24:47.240426   33267 retry.go:31] will retry after 218.964762ms: kubelet not initialised
	I0224 01:24:47.465916   33267 retry.go:31] will retry after 301.879234ms: kubelet not initialised
	I0224 01:24:47.774334   33267 kubeadm.go:784] kubelet initialised
	I0224 01:24:47.774357   33267 kubeadm.go:785] duration metric: took 538.862246ms waiting for restarted kubelet to initialise ...
	I0224 01:24:47.774363   33267 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 01:24:47.779055   33267 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-5kk6f" in "kube-system" namespace to be "Ready" ...
	I0224 01:24:47.787041   33267 pod_ready.go:92] pod "coredns-787d4945fb-5kk6f" in "kube-system" namespace has status "Ready":"True"
	I0224 01:24:47.787053   33267 pod_ready.go:81] duration metric: took 7.980412ms waiting for pod "coredns-787d4945fb-5kk6f" in "kube-system" namespace to be "Ready" ...
	I0224 01:24:47.787060   33267 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:24:49.808529   33267 pod_ready.go:102] pod "etcd-pause-966618" in "kube-system" namespace has status "Ready":"False"
	I0224 01:24:51.813126   33267 pod_ready.go:102] pod "etcd-pause-966618" in "kube-system" namespace has status "Ready":"False"
	I0224 01:24:54.309888   33267 pod_ready.go:102] pod "etcd-pause-966618" in "kube-system" namespace has status "Ready":"False"
	I0224 01:24:56.309957   33267 pod_ready.go:102] pod "etcd-pause-966618" in "kube-system" namespace has status "Ready":"False"
	I0224 01:24:58.784183   33267 pod_ready.go:102] pod "etcd-pause-966618" in "kube-system" namespace has status "Ready":"False"
	I0224 01:25:00.810267   33267 pod_ready.go:102] pod "etcd-pause-966618" in "kube-system" namespace has status "Ready":"False"
	I0224 01:25:02.309074   33267 pod_ready.go:92] pod "etcd-pause-966618" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:02.309106   33267 pod_ready.go:81] duration metric: took 14.522040109s waiting for pod "etcd-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:02.309120   33267 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:02.315314   33267 pod_ready.go:92] pod "kube-apiserver-pause-966618" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:02.315341   33267 pod_ready.go:81] duration metric: took 6.212713ms waiting for pod "kube-apiserver-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:02.315353   33267 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:02.319620   33267 pod_ready.go:92] pod "kube-controller-manager-pause-966618" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:02.319637   33267 pod_ready.go:81] duration metric: took 4.27604ms waiting for pod "kube-controller-manager-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:02.319647   33267 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7wlbf" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:02.326775   33267 pod_ready.go:92] pod "kube-proxy-7wlbf" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:02.326829   33267 pod_ready.go:81] duration metric: took 7.173792ms waiting for pod "kube-proxy-7wlbf" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:02.326852   33267 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:02.333414   33267 pod_ready.go:92] pod "kube-scheduler-pause-966618" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:02.333431   33267 pod_ready.go:81] duration metric: took 6.567454ms waiting for pod "kube-scheduler-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:02.333439   33267 pod_ready.go:38] duration metric: took 14.559067346s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
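
The pod_ready lines above poll each system-critical pod until its Ready condition reports True, retrying for up to 4m0s per pod; etcd-pause-966618 is the slow one in this run at 14.5s. A sketch of the same check using client-go, assuming a kubeconfig at the default location and borrowing the etcd pod name from this run:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls until the named kube-system pod has Ready=True.
    func waitPodReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %q not Ready within %v", name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitPodReady(cs, "etcd-pause-966618", 4*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("ready")
    }
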
	I0224 01:25:02.333460   33267 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0224 01:25:02.355296   33267 ops.go:34] apiserver oom_adj: -16
	I0224 01:25:02.355314   33267 kubeadm.go:637] restartCluster took 57.777507228s
	I0224 01:25:02.355326   33267 kubeadm.go:403] StartCluster complete in 57.85076012s
	I0224 01:25:02.355347   33267 settings.go:142] acquiring lock: {Name:mk174257a2297336a9e6f80080faa7ef819759a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 01:25:02.355426   33267 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15909-4074/kubeconfig
	I0224 01:25:02.356623   33267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-4074/kubeconfig: {Name:mk7a14c2c6ccf91ba70e9a5ad74574ac5676cf63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 01:25:02.356886   33267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0224 01:25:02.357022   33267 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0224 01:25:02.357120   33267 config.go:182] Loaded profile config "pause-966618": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 01:25:02.357180   33267 cache.go:107] acquiring lock: {Name:mk652b3b8459ff39d515b47d5e4228842d267921 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 01:25:02.359382   33267 out.go:177] * Enabled addons: 
	I0224 01:25:02.357243   33267 cache.go:115] /home/jenkins/minikube-integration/15909-4074/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I0224 01:25:02.357743   33267 kapi.go:59] client config for pause-966618: &rest.Config{Host:"https://192.168.50.59:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/profiles/pause-966618/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/profiles/pause-966618/client.key", CAFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 01:25:02.360819   33267 addons.go:492] enable addons completed in 3.794353ms: enabled=[]
	I0224 01:25:02.360843   33267 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/15909-4074/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 3.665817ms
	I0224 01:25:02.360860   33267 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/15909-4074/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I0224 01:25:02.360870   33267 cache.go:87] Successfully saved all images to host disk.
	I0224 01:25:02.361076   33267 config.go:182] Loaded profile config "pause-966618": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 01:25:02.361456   33267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:25:02.361507   33267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:25:02.364964   33267 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-966618" context rescaled to 1 replicas
	I0224 01:25:02.365000   33267 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.59 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0224 01:25:02.367473   33267 out.go:177] * Verifying Kubernetes components...
	I0224 01:25:02.368794   33267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 01:25:02.382200   33267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37331
	I0224 01:25:02.382731   33267 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:25:02.383282   33267 main.go:141] libmachine: Using API Version  1
	I0224 01:25:02.383305   33267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:25:02.383650   33267 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:25:02.383801   33267 main.go:141] libmachine: (pause-966618) Calling .GetState
	I0224 01:25:02.386097   33267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:25:02.386152   33267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:25:02.407245   33267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42479
	I0224 01:25:02.409582   33267 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:25:02.410204   33267 main.go:141] libmachine: Using API Version  1
	I0224 01:25:02.410229   33267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:25:02.410659   33267 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:25:02.410858   33267 main.go:141] libmachine: (pause-966618) Calling .DriverName
	I0224 01:25:02.411069   33267 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 01:25:02.411100   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHHostname
	I0224 01:25:02.415890   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:25:02.415936   33267 main.go:141] libmachine: (pause-966618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:4c:50", ip: ""} in network mk-pause-966618: {Iface:virbr2 ExpiryTime:2023-02-24 02:22:50 +0000 UTC Type:0 Mac:52:54:00:6b:4c:50 Iaid: IPaddr:192.168.50.59 Prefix:24 Hostname:pause-966618 Clientid:01:52:54:00:6b:4c:50}
	I0224 01:25:02.415959   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined IP address 192.168.50.59 and MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:25:02.416047   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHPort
	I0224 01:25:02.416230   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHKeyPath
	I0224 01:25:02.416416   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHUsername
	I0224 01:25:02.416697   33267 sshutil.go:53] new ssh client: &{IP:192.168.50.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/pause-966618/id_rsa Username:docker}
	I0224 01:25:02.581908   33267 node_ready.go:35] waiting up to 6m0s for node "pause-966618" to be "Ready" ...
	I0224 01:25:02.582172   33267 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0224 01:25:02.585550   33267 node_ready.go:49] node "pause-966618" has status "Ready":"True"
	I0224 01:25:02.585574   33267 node_ready.go:38] duration metric: took 3.634014ms waiting for node "pause-966618" to be "Ready" ...
	I0224 01:25:02.585585   33267 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 01:25:02.601775   33267 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0224 01:25:02.601800   33267 cache_images.go:84] Images are preloaded, skipping loading
	I0224 01:25:02.601807   33267 cache_images.go:262] succeeded pushing to: pause-966618
	I0224 01:25:02.601812   33267 cache_images.go:263] failed pushing to: 
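
The Got preloaded images block above comes from docker images --format {{.Repository}}:{{.Tag}}; since every tag kubeadm needs is already present, image loading is skipped. A sketch of that comparison, spot-checking a few of the tags listed above:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, ref := range strings.Fields(string(out)) {
            have[ref] = true
        }
        for _, want := range []string{
            "registry.k8s.io/kube-apiserver:v1.26.1",
            "registry.k8s.io/etcd:3.5.6-0",
            "registry.k8s.io/coredns/coredns:v1.9.3",
        } {
            if !have[want] {
                // A miss here would trigger a cache load instead of the skip seen above.
                fmt.Println("missing:", want)
            }
        }
    }
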
	I0224 01:25:02.601832   33267 main.go:141] libmachine: Making call to close driver server
	I0224 01:25:02.601843   33267 main.go:141] libmachine: (pause-966618) Calling .Close
	I0224 01:25:02.602141   33267 main.go:141] libmachine: Successfully made call to close driver server
	I0224 01:25:02.602162   33267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 01:25:02.602171   33267 main.go:141] libmachine: Making call to close driver server
	I0224 01:25:02.602180   33267 main.go:141] libmachine: (pause-966618) Calling .Close
	I0224 01:25:02.602914   33267 main.go:141] libmachine: Successfully made call to close driver server
	I0224 01:25:02.602930   33267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 01:25:02.711320   33267 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-5kk6f" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:03.117011   33267 pod_ready.go:92] pod "coredns-787d4945fb-5kk6f" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:03.117050   33267 pod_ready.go:81] duration metric: took 405.70699ms waiting for pod "coredns-787d4945fb-5kk6f" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:03.117065   33267 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:03.508668   33267 pod_ready.go:92] pod "etcd-pause-966618" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:03.508692   33267 pod_ready.go:81] duration metric: took 391.619335ms waiting for pod "etcd-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:03.508705   33267 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:03.907114   33267 pod_ready.go:92] pod "kube-apiserver-pause-966618" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:03.907137   33267 pod_ready.go:81] duration metric: took 398.424421ms waiting for pod "kube-apiserver-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:03.907149   33267 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:04.306213   33267 pod_ready.go:92] pod "kube-controller-manager-pause-966618" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:04.306238   33267 pod_ready.go:81] duration metric: took 399.079189ms waiting for pod "kube-controller-manager-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:04.306250   33267 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7wlbf" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:04.706570   33267 pod_ready.go:92] pod "kube-proxy-7wlbf" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:04.706595   33267 pod_ready.go:81] duration metric: took 400.337777ms waiting for pod "kube-proxy-7wlbf" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:04.706608   33267 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:05.107315   33267 pod_ready.go:92] pod "kube-scheduler-pause-966618" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:05.107347   33267 pod_ready.go:81] duration metric: took 400.730489ms waiting for pod "kube-scheduler-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:05.107364   33267 pod_ready.go:38] duration metric: took 2.521766254s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 01:25:05.107389   33267 api_server.go:51] waiting for apiserver process to appear ...
	I0224 01:25:05.107436   33267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 01:25:05.128123   33267 api_server.go:71] duration metric: took 2.763089316s to wait for apiserver process to appear ...
	I0224 01:25:05.128163   33267 api_server.go:87] waiting for apiserver healthz status ...
	I0224 01:25:05.128177   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:25:05.135652   33267 api_server.go:278] https://192.168.50.59:8443/healthz returned 200:
	ok
	I0224 01:25:05.136519   33267 api_server.go:140] control plane version: v1.26.1
	I0224 01:25:05.136537   33267 api_server.go:130] duration metric: took 8.36697ms to wait for apiserver health ...
	I0224 01:25:05.136546   33267 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 01:25:05.309967   33267 system_pods.go:59] 6 kube-system pods found
	I0224 01:25:05.310003   33267 system_pods.go:61] "coredns-787d4945fb-5kk6f" [864031ca-0190-46a6-9191-bed0ab15761f] Running
	I0224 01:25:05.310011   33267 system_pods.go:61] "etcd-pause-966618" [cd22a134-0381-429c-b9b1-9cc9c2130730] Running
	I0224 01:25:05.310018   33267 system_pods.go:61] "kube-apiserver-pause-966618" [152ed33c-514e-4289-a994-58e7d466b19d] Running
	I0224 01:25:05.310024   33267 system_pods.go:61] "kube-controller-manager-pause-966618" [e87845de-aa91-4c77-9ece-00268d888b81] Running
	I0224 01:25:05.310030   33267 system_pods.go:61] "kube-proxy-7wlbf" [98036b9d-4a03-4d42-9f71-28b8df888be5] Running
	I0224 01:25:05.310038   33267 system_pods.go:61] "kube-scheduler-pause-966618" [c6381154-d98c-4778-886c-6390c12c324e] Running
	I0224 01:25:05.310045   33267 system_pods.go:74] duration metric: took 173.493465ms to wait for pod list to return data ...
	I0224 01:25:05.310057   33267 default_sa.go:34] waiting for default service account to be created ...
	I0224 01:25:05.506994   33267 default_sa.go:45] found service account: "default"
	I0224 01:25:05.507025   33267 default_sa.go:55] duration metric: took 196.958905ms for default service account to be created ...
	I0224 01:25:05.507048   33267 system_pods.go:116] waiting for k8s-apps to be running ...
	I0224 01:25:05.709913   33267 system_pods.go:86] 6 kube-system pods found
	I0224 01:25:05.709939   33267 system_pods.go:89] "coredns-787d4945fb-5kk6f" [864031ca-0190-46a6-9191-bed0ab15761f] Running
	I0224 01:25:05.709946   33267 system_pods.go:89] "etcd-pause-966618" [cd22a134-0381-429c-b9b1-9cc9c2130730] Running
	I0224 01:25:05.709953   33267 system_pods.go:89] "kube-apiserver-pause-966618" [152ed33c-514e-4289-a994-58e7d466b19d] Running
	I0224 01:25:05.709960   33267 system_pods.go:89] "kube-controller-manager-pause-966618" [e87845de-aa91-4c77-9ece-00268d888b81] Running
	I0224 01:25:05.709966   33267 system_pods.go:89] "kube-proxy-7wlbf" [98036b9d-4a03-4d42-9f71-28b8df888be5] Running
	I0224 01:25:05.709972   33267 system_pods.go:89] "kube-scheduler-pause-966618" [c6381154-d98c-4778-886c-6390c12c324e] Running
	I0224 01:25:05.709981   33267 system_pods.go:126] duration metric: took 202.927244ms to wait for k8s-apps to be running ...
	I0224 01:25:05.709993   33267 system_svc.go:44] waiting for kubelet service to be running ....
	I0224 01:25:05.710044   33267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 01:25:05.724049   33267 system_svc.go:56] duration metric: took 14.047173ms WaitForService to wait for kubelet.
	I0224 01:25:05.724072   33267 kubeadm.go:578] duration metric: took 3.359049103s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
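
The 3.3s summary above closes component verification: apiserver process and healthz, default service account, running kube-system pods, and an active kubelet, the last confirmed purely by the exit status of systemctl is-active --quiet. A sketch of that final check, assuming a local root shell rather than the sudo-over-ssh runner used in the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // is-active exits 0 only for an active unit; --quiet suppresses the
        // state string, so the exit code alone carries the answer.
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }
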
	I0224 01:25:05.724093   33267 node_conditions.go:102] verifying NodePressure condition ...
	I0224 01:25:05.908529   33267 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0224 01:25:05.908554   33267 node_conditions.go:123] node cpu capacity is 2
	I0224 01:25:05.908564   33267 node_conditions.go:105] duration metric: took 184.464952ms to run NodePressure ...
	I0224 01:25:05.908574   33267 start.go:228] waiting for startup goroutines ...
	I0224 01:25:05.908580   33267 start.go:233] waiting for cluster config update ...
	I0224 01:25:05.908587   33267 start.go:242] writing updated cluster config ...
	I0224 01:25:05.908913   33267 ssh_runner.go:195] Run: rm -f paused
	I0224 01:25:05.962958   33267 start.go:555] kubectl: 1.26.1, cluster: 1.26.1 (minor skew: 0)
	I0224 01:25:05.966906   33267 out.go:177] * Done! kubectl is now configured to use "pause-966618" cluster and "default" namespace by default

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-966618 -n pause-966618
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-966618 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-966618 logs -n 25: (1.145169388s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                      Args                      |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-707178                   | kubernetes-upgrade-707178 | jenkins | v1.29.0 | 24 Feb 23 01:20 UTC |                     |
	|         | --memory=2200                                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-707178                   | kubernetes-upgrade-707178 | jenkins | v1.29.0 | 24 Feb 23 01:20 UTC | 24 Feb 23 01:21 UTC |
	|         | --memory=2200                                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-676643                      | running-upgrade-676643    | jenkins | v1.29.0 | 24 Feb 23 01:20 UTC | 24 Feb 23 01:20 UTC |
	| start   | -p force-systemd-env-457559                    | force-systemd-env-457559  | jenkins | v1.29.0 | 24 Feb 23 01:20 UTC | 24 Feb 23 01:22 UTC |
	|         | --memory=2048                                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-273695                      | stopped-upgrade-273695    | jenkins | v1.29.0 | 24 Feb 23 01:21 UTC | 24 Feb 23 01:21 UTC |
	| start   | -p cert-expiration-843515                      | cert-expiration-843515    | jenkins | v1.29.0 | 24 Feb 23 01:21 UTC | 24 Feb 23 01:22 UTC |
	|         | --memory=2048                                  |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                           |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| cache   | gvisor-694714 cache add                        | gvisor-694714             | jenkins | v1.29.0 | 24 Feb 23 01:21 UTC | 24 Feb 23 01:21 UTC |
	|         | gcr.io/k8s-minikube/gvisor-addon:2             |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-707178                   | kubernetes-upgrade-707178 | jenkins | v1.29.0 | 24 Feb 23 01:21 UTC | 24 Feb 23 01:21 UTC |
	| start   | -p docker-flags-724706                         | docker-flags-724706       | jenkins | v1.29.0 | 24 Feb 23 01:21 UTC | 24 Feb 23 01:23 UTC |
	|         | --cache-images=false                           |                           |         |         |                     |                     |
	|         | --memory=2048                                  |                           |         |         |                     |                     |
	|         | --install-addons=false                         |                           |         |         |                     |                     |
	|         | --wait=false                                   |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                           |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                           |                           |         |         |                     |                     |
	|         | --docker-opt=debug                             |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| addons  | gvisor-694714 addons enable                    | gvisor-694714             | jenkins | v1.29.0 | 24 Feb 23 01:21 UTC | 24 Feb 23 01:22 UTC |
	|         | gvisor                                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-457559                       | force-systemd-env-457559  | jenkins | v1.29.0 | 24 Feb 23 01:22 UTC | 24 Feb 23 01:22 UTC |
	|         | ssh docker info --format                       |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-457559                    | force-systemd-env-457559  | jenkins | v1.29.0 | 24 Feb 23 01:22 UTC | 24 Feb 23 01:22 UTC |
	| start   | -p pause-966618 --memory=2048                  | pause-966618              | jenkins | v1.29.0 | 24 Feb 23 01:22 UTC | 24 Feb 23 01:23 UTC |
	|         | --install-addons=false                         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                       |                           |         |         |                     |                     |
	| stop    | -p gvisor-694714                               | gvisor-694714             | jenkins | v1.29.0 | 24 Feb 23 01:23 UTC | 24 Feb 23 01:24 UTC |
	| ssh     | docker-flags-724706 ssh                        | docker-flags-724706       | jenkins | v1.29.0 | 24 Feb 23 01:23 UTC | 24 Feb 23 01:23 UTC |
	|         | sudo systemctl show docker                     |                           |         |         |                     |                     |
	|         | --property=Environment                         |                           |         |         |                     |                     |
	|         | --no-pager                                     |                           |         |         |                     |                     |
	| ssh     | docker-flags-724706 ssh                        | docker-flags-724706       | jenkins | v1.29.0 | 24 Feb 23 01:23 UTC | 24 Feb 23 01:23 UTC |
	|         | sudo systemctl show docker                     |                           |         |         |                     |                     |
	|         | --property=ExecStart                           |                           |         |         |                     |                     |
	|         | --no-pager                                     |                           |         |         |                     |                     |
	| delete  | -p docker-flags-724706                         | docker-flags-724706       | jenkins | v1.29.0 | 24 Feb 23 01:23 UTC | 24 Feb 23 01:23 UTC |
	| start   | -p cert-options-398057                         | cert-options-398057       | jenkins | v1.29.0 | 24 Feb 23 01:23 UTC | 24 Feb 23 01:24 UTC |
	|         | --memory=2048                                  |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                      |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                  |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                    |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com               |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                          |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| start   | -p pause-966618                                | pause-966618              | jenkins | v1.29.0 | 24 Feb 23 01:23 UTC | 24 Feb 23 01:25 UTC |
	|         | --alsologtostderr -v=1                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| ssh     | cert-options-398057 ssh                        | cert-options-398057       | jenkins | v1.29.0 | 24 Feb 23 01:24 UTC | 24 Feb 23 01:24 UTC |
	|         | openssl x509 -text -noout -in                  |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt          |                           |         |         |                     |                     |
	| ssh     | -p cert-options-398057 -- sudo                 | cert-options-398057       | jenkins | v1.29.0 | 24 Feb 23 01:24 UTC | 24 Feb 23 01:24 UTC |
	|         | cat /etc/kubernetes/admin.conf                 |                           |         |         |                     |                     |
	| delete  | -p cert-options-398057                         | cert-options-398057       | jenkins | v1.29.0 | 24 Feb 23 01:24 UTC | 24 Feb 23 01:24 UTC |
	| start   | -p NoKubernetes-394034                         | NoKubernetes-394034       | jenkins | v1.29.0 | 24 Feb 23 01:24 UTC |                     |
	|         | --no-kubernetes                                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20                      |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-394034                         | NoKubernetes-394034       | jenkins | v1.29.0 | 24 Feb 23 01:24 UTC |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| start   | -p gvisor-694714 --memory=2200                 | gvisor-694714             | jenkins | v1.29.0 | 24 Feb 23 01:24 UTC |                     |
	|         | --container-runtime=containerd --docker-opt    |                           |         |         |                     |                     |
	|         | containerd=/var/run/containerd/containerd.sock |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	|---------|------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/24 01:24:34
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.20.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 01:24:34.466247   33676 out.go:296] Setting OutFile to fd 1 ...
	I0224 01:24:34.466376   33676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 01:24:34.466380   33676 out.go:309] Setting ErrFile to fd 2...
	I0224 01:24:34.466385   33676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 01:24:34.466525   33676 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-4074/.minikube/bin
	I0224 01:24:34.467084   33676 out.go:303] Setting JSON to false
	I0224 01:24:34.467946   33676 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":4024,"bootTime":1677197851,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 01:24:34.467993   33676 start.go:135] virtualization: kvm guest
	I0224 01:24:34.470329   33676 out.go:177] * [gvisor-694714] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0224 01:24:34.471671   33676 out.go:177]   - MINIKUBE_LOCATION=15909
	I0224 01:24:34.471671   33676 notify.go:220] Checking for updates...
	I0224 01:24:34.473831   33676 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 01:24:34.475158   33676 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15909-4074/kubeconfig
	I0224 01:24:34.476343   33676 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-4074/.minikube
	I0224 01:24:34.477579   33676 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0224 01:24:34.478854   33676 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 01:24:34.480493   33676 config.go:182] Loaded profile config "gvisor-694714": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.26.1
	I0224 01:24:34.481000   33676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:24:34.481065   33676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:24:34.495604   33676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36961
	I0224 01:24:34.496309   33676 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:24:34.496902   33676 main.go:141] libmachine: Using API Version  1
	I0224 01:24:34.496919   33676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:24:34.497435   33676 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:24:34.497678   33676 main.go:141] libmachine: (gvisor-694714) Calling .DriverName
	I0224 01:24:34.497879   33676 driver.go:365] Setting default libvirt URI to qemu:///system
	I0224 01:24:34.498279   33676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:24:34.498308   33676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:24:34.512385   33676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33707
	I0224 01:24:34.512720   33676 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:24:34.513148   33676 main.go:141] libmachine: Using API Version  1
	I0224 01:24:34.513167   33676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:24:34.513424   33676 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:24:34.513601   33676 main.go:141] libmachine: (gvisor-694714) Calling .DriverName
	I0224 01:24:34.547588   33676 out.go:177] * Using the kvm2 driver based on existing profile
	I0224 01:24:34.548732   33676 start.go:296] selected driver: kvm2
	I0224 01:24:34.548736   33676 start.go:857] validating driver "kvm2" against &{Name:gvisor-694714 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[containerd=/var/run/containerd/containerd.sock] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:gvisor-694714 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true gvisor:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 01:24:34.548841   33676 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 01:24:34.549376   33676 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 01:24:34.549437   33676 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15909-4074/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0224 01:24:34.562930   33676 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.29.0
	I0224 01:24:34.563200   33676 cni.go:84] Creating CNI manager for ""
	I0224 01:24:34.563212   33676 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0224 01:24:34.563219   33676 start_flags.go:319] config:
	{Name:gvisor-694714 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[containerd=/var/run/containerd/containerd.sock] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:gvisor-694714 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true gvisor:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 01:24:34.563314   33676 iso.go:125] acquiring lock: {Name:mkc3d6185dc03bdb5dc9fb9cd39dd085e0eef640 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 01:24:34.564989   33676 out.go:177] * Starting control plane node gvisor-694714 in cluster gvisor-694714
	I0224 01:24:32.120974   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:32.121579   33267 api_server.go:268] stopped: https://192.168.50.59:8443/healthz: Get "https://192.168.50.59:8443/healthz": dial tcp 192.168.50.59:8443: connect: connection refused
	I0224 01:24:32.121616   33267 retry.go:31] will retry after 1.661532498s: state is "Stopped"
	I0224 01:24:33.784375   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:33.785016   33267 api_server.go:268] stopped: https://192.168.50.59:8443/healthz: Get "https://192.168.50.59:8443/healthz": dial tcp 192.168.50.59:8443: connect: connection refused
	I0224 01:24:33.785071   33267 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0224 01:24:33.785080   33267 kubeadm.go:1120] stopping kube-system containers ...
	I0224 01:24:33.785148   33267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0224 01:24:33.815898   33267 docker.go:456] Stopping containers: [4fbc20ae5bf7 ad958c9f692b e8da938203fc 426996151fbc 29450e6f7ba6 fac1cbcee419 4ce5be8f6eb9 f3b13c7b2655 de777fb1d6b3 eebafd5971ce dfb4d0df0765 d6f702db3bf6 4f657627f345 99c7798b80b0 228dc753c532 512bb7879a7d 280d511aab5e c0eb86ebedf9 6e49e0ea3699 063bb6925a9d 56e99b7105e1 37f4903be34d]
	I0224 01:24:33.815973   33267 ssh_runner.go:195] Run: docker stop 4fbc20ae5bf7 ad958c9f692b e8da938203fc 426996151fbc 29450e6f7ba6 fac1cbcee419 4ce5be8f6eb9 f3b13c7b2655 de777fb1d6b3 eebafd5971ce dfb4d0df0765 d6f702db3bf6 4f657627f345 99c7798b80b0 228dc753c532 512bb7879a7d 280d511aab5e c0eb86ebedf9 6e49e0ea3699 063bb6925a9d 56e99b7105e1 37f4903be34d
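
The docker ps filter above leans on the kubelet's container naming scheme, k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so matching the namespace segment selects every kube-system container in one pass before stopping them. A sketch of the same two-step stop, assuming a local docker CLI instead of the log's ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_.*_(kube-system)_",
            "--format", "{{.ID}}").Output()
        if err != nil {
            panic(err)
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            return // nothing to stop
        }
        if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
            fmt.Println("docker stop failed:", err)
        }
    }
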
	I0224 01:24:32.744696   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:32.745177   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | unable to find current IP address of domain NoKubernetes-394034 in network mk-NoKubernetes-394034
	I0224 01:24:32.745192   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | I0224 01:24:32.745145   33563 retry.go:31] will retry after 2.669961808s: waiting for machine to come up
	I0224 01:24:35.418769   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:35.419173   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | unable to find current IP address of domain NoKubernetes-394034 in network mk-NoKubernetes-394034
	I0224 01:24:35.419196   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | I0224 01:24:35.419125   33563 retry.go:31] will retry after 3.056903471s: waiting for machine to come up
	I0224 01:24:34.566141   33676 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime containerd
	I0224 01:24:34.566195   33676 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-containerd-overlay2-amd64.tar.lz4
	I0224 01:24:34.566210   33676 cache.go:57] Caching tarball of preloaded images
	I0224 01:24:34.566308   33676 preload.go:174] Found /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0224 01:24:34.566316   33676 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on containerd
	I0224 01:24:34.566468   33676 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/gvisor-694714/config.json ...
	I0224 01:24:34.566679   33676 cache.go:193] Successfully downloaded all kic artifacts
	I0224 01:24:34.566710   33676 start.go:364] acquiring machines lock for gvisor-694714: {Name:mk99c679472abf655c2223ea7db4ce727d2ab6ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0224 01:24:39.082069   33267 ssh_runner.go:235] Completed: docker stop 4fbc20ae5bf7 ad958c9f692b e8da938203fc 426996151fbc 29450e6f7ba6 fac1cbcee419 4ce5be8f6eb9 f3b13c7b2655 de777fb1d6b3 eebafd5971ce dfb4d0df0765 d6f702db3bf6 4f657627f345 99c7798b80b0 228dc753c532 512bb7879a7d 280d511aab5e c0eb86ebedf9 6e49e0ea3699 063bb6925a9d 56e99b7105e1 37f4903be34d: (5.266065396s)
	I0224 01:24:39.082132   33267 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0224 01:24:39.126516   33267 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 01:24:39.136579   33267 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Feb 24 01:23 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Feb 24 01:23 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Feb 24 01:23 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Feb 24 01:23 /etc/kubernetes/scheduler.conf
	
	I0224 01:24:39.136646   33267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0224 01:24:39.145075   33267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0224 01:24:39.153171   33267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0224 01:24:39.161688   33267 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0224 01:24:39.161744   33267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0224 01:24:39.169623   33267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0224 01:24:39.177326   33267 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0224 01:24:39.177391   33267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0224 01:24:39.185537   33267 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 01:24:39.194208   33267 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
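The reconfigure check above greps each kubeconfig for the expected control-plane endpoint and deletes any file that no longer references it, so the kubeadm phases that follow can regenerate them. A hedged Go sketch of that check (file paths taken from the log; this is not the actual kubeadm.go logic):

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func main() {
		endpoint := []byte("https://control-plane.minikube.internal:8443")
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(f)
			// Missing file or stale endpoint: remove so kubeadm recreates it.
			if err != nil || !bytes.Contains(data, endpoint) {
				fmt.Println("will remove:", f)
				os.Remove(f)
			}
		}
	}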
	I0224 01:24:39.194230   33267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 01:24:39.320624   33267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 01:24:40.517893   33267 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.197229724s)
	I0224 01:24:40.517927   33267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0224 01:24:40.735504   33267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 01:24:40.833361   33267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0224 01:24:40.950707   33267 api_server.go:51] waiting for apiserver process to appear ...
	I0224 01:24:40.950773   33267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 01:24:40.964974   33267 api_server.go:71] duration metric: took 14.2683ms to wait for apiserver process to appear ...
	I0224 01:24:40.965001   33267 api_server.go:87] waiting for apiserver healthz status ...
	I0224 01:24:40.965013   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:38.477199   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:38.477593   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | unable to find current IP address of domain NoKubernetes-394034 in network mk-NoKubernetes-394034
	I0224 01:24:38.477616   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | I0224 01:24:38.477544   33563 retry.go:31] will retry after 3.07394937s: waiting for machine to come up
	I0224 01:24:41.554657   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:41.555172   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | unable to find current IP address of domain NoKubernetes-394034 in network mk-NoKubernetes-394034
	I0224 01:24:41.555188   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | I0224 01:24:41.555138   33563 retry.go:31] will retry after 4.525311684s: waiting for machine to come up
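The repeated "will retry after ..." lines come from a retry helper that sleeps a jittered, growing delay between attempts. A minimal sketch of that pattern (the base delay, jitter, and attempt count here are illustrative, not minikube's actual retry policy):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// Jitter the delay so concurrent waiters do not wake in lockstep.
			d := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %s: %v\n", d, err)
			time.Sleep(d)
			base *= 2
		}
		return err
	}

	func main() {
		_ = retry(3, time.Second, func() error {
			return errors.New("waiting for machine to come up")
		})
	}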
	I0224 01:24:45.376650   33267 api_server.go:278] https://192.168.50.59:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0224 01:24:45.376682   33267 api_server.go:102] status: https://192.168.50.59:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0224 01:24:45.876934   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:45.882491   33267 api_server.go:278] https://192.168.50.59:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0224 01:24:45.882516   33267 api_server.go:102] status: https://192.168.50.59:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 01:24:46.376861   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:46.382503   33267 api_server.go:278] https://192.168.50.59:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0224 01:24:46.382529   33267 api_server.go:102] status: https://192.168.50.59:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 01:24:46.877091   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:46.882591   33267 api_server.go:278] https://192.168.50.59:8443/healthz returned 200:
	ok
	I0224 01:24:46.891863   33267 api_server.go:140] control plane version: v1.26.1
	I0224 01:24:46.891882   33267 api_server.go:130] duration metric: took 5.926875892s to wait for apiserver health ...
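The healthz wait above is a plain polling loop: GET /healthz roughly every 500ms, treat 403 (the anonymous probe user) and 500 (poststarthooks still failing) as not ready, and stop at 200. A minimal sketch under those assumptions (the insecure TLS config is for illustration only; the real client trusts the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		url := "https://192.168.50.59:8443/healthz"
		for {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned 200: %s\n", body)
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
	}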
	I0224 01:24:46.891891   33267 cni.go:84] Creating CNI manager for ""
	I0224 01:24:46.891900   33267 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0224 01:24:46.893673   33267 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0224 01:24:46.894663   33267 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0224 01:24:46.905495   33267 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0224 01:24:46.084008   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:46.084556   33541 main.go:141] libmachine: (NoKubernetes-394034) Found IP for machine: 192.168.61.116
	I0224 01:24:46.084573   33541 main.go:141] libmachine: (NoKubernetes-394034) Reserving static IP address...
	I0224 01:24:46.084588   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has current primary IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:46.085009   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | unable to find host DHCP lease matching {name: "NoKubernetes-394034", mac: "52:54:00:9f:50:8f", ip: "192.168.61.116"} in network mk-NoKubernetes-394034
	I0224 01:24:46.160376   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | Getting to WaitForSSH function...
	I0224 01:24:46.160398   33541 main.go:141] libmachine: (NoKubernetes-394034) Reserved static IP address: 192.168.61.116
	I0224 01:24:46.160411   33541 main.go:141] libmachine: (NoKubernetes-394034) Waiting for SSH to be available...
	I0224 01:24:46.163070   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:46.163328   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034
	I0224 01:24:46.163344   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | unable to find defined IP address of network mk-NoKubernetes-394034 interface with MAC address 52:54:00:9f:50:8f
	I0224 01:24:46.163567   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | Using SSH client type: external
	I0224 01:24:46.163589   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | Using SSH private key: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/NoKubernetes-394034/id_rsa (-rw-------)
	I0224 01:24:46.163615   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15909-4074/.minikube/machines/NoKubernetes-394034/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0224 01:24:46.163625   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | About to run SSH command:
	I0224 01:24:46.163635   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | exit 0
	I0224 01:24:46.167456   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | SSH cmd err, output: exit status 255: 
	I0224 01:24:46.167472   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0224 01:24:46.167483   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | command : exit 0
	I0224 01:24:46.167498   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | err     : exit status 255
	I0224 01:24:46.167510   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | output  : 
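The exit-status-255 failure above is the expected outcome while sshd inside the guest is still booting; WaitForSSH simply reruns a cheap "exit 0" over ssh until it succeeds. A hedged sketch of such a probe (the key path is a placeholder, and the flag set is abbreviated from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func sshReady(host, key string) bool {
		cmd := exec.Command("ssh",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-i", key, "-p", "22",
			"docker@"+host, "exit 0")
		// Exit status 255 means sshd is not reachable yet.
		return cmd.Run() == nil
	}

	func main() {
		// "/path/to/id_rsa" is a placeholder, not a real path from this run.
		for !sshReady("192.168.61.116", "/path/to/id_rsa") {
			fmt.Println("waiting for SSH")
			time.Sleep(3 * time.Second)
		}
		fmt.Println("SSH available")
	}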
	I0224 01:24:51.354223   33676 start.go:368] acquired machines lock for "gvisor-694714" in 16.787477726s
	I0224 01:24:51.354257   33676 start.go:96] Skipping create...Using existing machine configuration
	I0224 01:24:51.354262   33676 fix.go:55] fixHost starting: 
	I0224 01:24:51.354651   33676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:24:51.354692   33676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:24:51.373159   33676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46257
	I0224 01:24:51.373629   33676 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:24:51.374176   33676 main.go:141] libmachine: Using API Version  1
	I0224 01:24:51.374199   33676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:24:51.374548   33676 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:24:51.374731   33676 main.go:141] libmachine: (gvisor-694714) Calling .DriverName
	I0224 01:24:51.374905   33676 main.go:141] libmachine: (gvisor-694714) Calling .GetState
	I0224 01:24:51.376294   33676 fix.go:103] recreateIfNeeded on gvisor-694714: state=Stopped err=<nil>
	I0224 01:24:51.376325   33676 main.go:141] libmachine: (gvisor-694714) Calling .DriverName
	W0224 01:24:51.376485   33676 fix.go:129] unexpected machine state, will restart: <nil>
	I0224 01:24:51.378584   33676 out.go:177] * Restarting existing kvm2 VM for "gvisor-694714" ...
	I0224 01:24:46.921751   33267 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 01:24:46.934126   33267 system_pods.go:59] 6 kube-system pods found
	I0224 01:24:46.934148   33267 system_pods.go:61] "coredns-787d4945fb-5kk6f" [864031ca-0190-46a6-9191-bed0ab15761f] Running
	I0224 01:24:46.934153   33267 system_pods.go:61] "etcd-pause-966618" [cd22a134-0381-429c-b9b1-9cc9c2130730] Running
	I0224 01:24:46.934157   33267 system_pods.go:61] "kube-apiserver-pause-966618" [152ed33c-514e-4289-a994-58e7d466b19d] Running
	I0224 01:24:46.934162   33267 system_pods.go:61] "kube-controller-manager-pause-966618" [e87845de-aa91-4c77-9ece-00268d888b81] Running
	I0224 01:24:46.934168   33267 system_pods.go:61] "kube-proxy-7wlbf" [98036b9d-4a03-4d42-9f71-28b8df888be5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0224 01:24:46.934172   33267 system_pods.go:61] "kube-scheduler-pause-966618" [c6381154-d98c-4778-886c-6390c12c324e] Running
	I0224 01:24:46.934177   33267 system_pods.go:74] duration metric: took 12.408948ms to wait for pod list to return data ...
	I0224 01:24:46.934186   33267 node_conditions.go:102] verifying NodePressure condition ...
	I0224 01:24:46.937710   33267 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0224 01:24:46.937732   33267 node_conditions.go:123] node cpu capacity is 2
	I0224 01:24:46.937740   33267 node_conditions.go:105] duration metric: took 3.549963ms to run NodePressure ...
	I0224 01:24:46.937755   33267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 01:24:47.235472   33267 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0224 01:24:47.240426   33267 retry.go:31] will retry after 218.964762ms: kubelet not initialised
	I0224 01:24:47.465916   33267 retry.go:31] will retry after 301.879234ms: kubelet not initialised
	I0224 01:24:47.774334   33267 kubeadm.go:784] kubelet initialised
	I0224 01:24:47.774357   33267 kubeadm.go:785] duration metric: took 538.862246ms waiting for restarted kubelet to initialise ...
	I0224 01:24:47.774363   33267 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 01:24:47.779055   33267 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-5kk6f" in "kube-system" namespace to be "Ready" ...
	I0224 01:24:47.787041   33267 pod_ready.go:92] pod "coredns-787d4945fb-5kk6f" in "kube-system" namespace has status "Ready":"True"
	I0224 01:24:47.787053   33267 pod_ready.go:81] duration metric: took 7.980412ms waiting for pod "coredns-787d4945fb-5kk6f" in "kube-system" namespace to be "Ready" ...
	I0224 01:24:47.787060   33267 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:24:49.808529   33267 pod_ready.go:102] pod "etcd-pause-966618" in "kube-system" namespace has status "Ready":"False"
	I0224 01:24:51.813126   33267 pod_ready.go:102] pod "etcd-pause-966618" in "kube-system" namespace has status "Ready":"False"
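These pod_ready lines poll each system-critical pod until its PodReady condition turns True. A sketch of the same wait using client-go (an assumed dependency; the pod name and kubeconfig path are taken from the log, and this is not minikube's pod_ready.go):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(
				context.TODO(), "etcd-pause-966618", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					// Ready means the pod passed its readiness checks.
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
	}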
	I0224 01:24:49.167782   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | Getting to WaitForSSH function...
	I0224 01:24:49.170422   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.170837   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:49.170854   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.170974   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | Using SSH client type: external
	I0224 01:24:49.171011   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | Using SSH private key: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/NoKubernetes-394034/id_rsa (-rw-------)
	I0224 01:24:49.171032   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15909-4074/.minikube/machines/NoKubernetes-394034/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0224 01:24:49.171037   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | About to run SSH command:
	I0224 01:24:49.171044   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | exit 0
	I0224 01:24:49.257572   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | SSH cmd err, output: <nil>: 
	I0224 01:24:49.257865   33541 main.go:141] libmachine: (NoKubernetes-394034) KVM machine creation complete!
	I0224 01:24:49.258204   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetConfigRaw
	I0224 01:24:49.258689   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .DriverName
	I0224 01:24:49.258935   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .DriverName
	I0224 01:24:49.259088   33541 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0224 01:24:49.259100   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetState
	I0224 01:24:49.260539   33541 main.go:141] libmachine: Detecting operating system of created instance...
	I0224 01:24:49.260549   33541 main.go:141] libmachine: Waiting for SSH to be available...
	I0224 01:24:49.260556   33541 main.go:141] libmachine: Getting to WaitForSSH function...
	I0224 01:24:49.260565   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHHostname
	I0224 01:24:49.262852   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.263193   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:49.263216   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.263364   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHPort
	I0224 01:24:49.263528   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:49.263681   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:49.263813   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHUsername
	I0224 01:24:49.263948   33541 main.go:141] libmachine: Using SSH client type: native
	I0224 01:24:49.264352   33541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0224 01:24:49.264358   33541 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0224 01:24:49.376440   33541 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 01:24:49.376455   33541 main.go:141] libmachine: Detecting the provisioner...
	I0224 01:24:49.376461   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHHostname
	I0224 01:24:49.379231   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.379560   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:49.379580   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.379696   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHPort
	I0224 01:24:49.379868   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:49.379969   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:49.380090   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHUsername
	I0224 01:24:49.380235   33541 main.go:141] libmachine: Using SSH client type: native
	I0224 01:24:49.380613   33541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0224 01:24:49.380621   33541 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0224 01:24:49.494239   33541 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g41e8300-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0224 01:24:49.494291   33541 main.go:141] libmachine: found compatible host: buildroot
	I0224 01:24:49.494296   33541 main.go:141] libmachine: Provisioning with buildroot...
	I0224 01:24:49.494303   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetMachineName
	I0224 01:24:49.494526   33541 buildroot.go:166] provisioning hostname "NoKubernetes-394034"
	I0224 01:24:49.494561   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetMachineName
	I0224 01:24:49.494731   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHHostname
	I0224 01:24:49.497224   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.497585   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:49.497605   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.497753   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHPort
	I0224 01:24:49.497892   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:49.498040   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:49.498175   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHUsername
	I0224 01:24:49.498313   33541 main.go:141] libmachine: Using SSH client type: native
	I0224 01:24:49.498777   33541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0224 01:24:49.498785   33541 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-394034 && echo "NoKubernetes-394034" | sudo tee /etc/hostname
	I0224 01:24:49.623459   33541 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-394034
	
	I0224 01:24:49.623473   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHHostname
	I0224 01:24:49.626153   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.626480   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:49.626496   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.626651   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHPort
	I0224 01:24:49.626808   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:49.626952   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:49.627110   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHUsername
	I0224 01:24:49.627252   33541 main.go:141] libmachine: Using SSH client type: native
	I0224 01:24:49.627713   33541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0224 01:24:49.627725   33541 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-394034' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-394034/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-394034' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 01:24:49.753972   33541 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 01:24:49.753986   33541 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15909-4074/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-4074/.minikube}
	I0224 01:24:49.753998   33541 buildroot.go:174] setting up certificates
	I0224 01:24:49.754006   33541 provision.go:83] configureAuth start
	I0224 01:24:49.754016   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetMachineName
	I0224 01:24:49.754296   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetIP
	I0224 01:24:49.756940   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.757313   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:49.757338   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.757523   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHHostname
	I0224 01:24:49.759863   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.760217   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:49.760240   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.760367   33541 provision.go:138] copyHostCerts
	I0224 01:24:49.760425   33541 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem, removing ...
	I0224 01:24:49.760431   33541 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem
	I0224 01:24:49.760507   33541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem (1679 bytes)
	I0224 01:24:49.760626   33541 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem, removing ...
	I0224 01:24:49.760632   33541 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem
	I0224 01:24:49.760669   33541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem (1078 bytes)
	I0224 01:24:49.760727   33541 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem, removing ...
	I0224 01:24:49.760730   33541 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem
	I0224 01:24:49.760750   33541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem (1123 bytes)
	I0224 01:24:49.760796   33541 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-394034 san=[192.168.61.116 192.168.61.116 localhost 127.0.0.1 minikube NoKubernetes-394034]
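The server certificate generated here carries the machine's IPs and hostnames as SANs, so TLS connections to the Docker daemon verify no matter which address the client dials. A self-signed Go sketch with the same SAN list (minikube actually signs with ca.pem/ca-key.pem rather than self-signing):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.NoKubernetes-394034"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs mirror the san=[...] list in the log line above.
			DNSNames:    []string{"localhost", "minikube", "NoKubernetes-394034"},
			IPAddresses: []net.IP{net.ParseIP("192.168.61.116"), net.ParseIP("127.0.0.1")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}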
	I0224 01:24:49.919118   33541 provision.go:172] copyRemoteCerts
	I0224 01:24:49.919175   33541 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 01:24:49.919203   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHHostname
	I0224 01:24:49.922478   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.922840   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:49.922865   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.923121   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHPort
	I0224 01:24:49.923314   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:49.923500   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHUsername
	I0224 01:24:49.923662   33541 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/NoKubernetes-394034/id_rsa Username:docker}
	I0224 01:24:50.015124   33541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0224 01:24:50.040165   33541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0224 01:24:50.062596   33541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0224 01:24:50.085045   33541 provision.go:86] duration metric: configureAuth took 331.027157ms
	I0224 01:24:50.085062   33541 buildroot.go:189] setting minikube options for container-runtime
	I0224 01:24:50.085263   33541 config.go:182] Loaded profile config "NoKubernetes-394034": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 01:24:50.085284   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .DriverName
	I0224 01:24:50.085580   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHHostname
	I0224 01:24:50.088430   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:50.088780   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:50.088799   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:50.088944   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHPort
	I0224 01:24:50.089127   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:50.089322   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:50.089502   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHUsername
	I0224 01:24:50.089672   33541 main.go:141] libmachine: Using SSH client type: native
	I0224 01:24:50.090054   33541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0224 01:24:50.090061   33541 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0224 01:24:50.210906   33541 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0224 01:24:50.210917   33541 buildroot.go:70] root file system type: tmpfs
	I0224 01:24:50.211010   33541 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0224 01:24:50.211029   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHHostname
	I0224 01:24:50.213433   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:50.213772   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:50.213791   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:50.213956   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHPort
	I0224 01:24:50.214135   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:50.214261   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:50.214406   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHUsername
	I0224 01:24:50.214526   33541 main.go:141] libmachine: Using SSH client type: native
	I0224 01:24:50.214910   33541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0224 01:24:50.214960   33541 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0224 01:24:50.343633   33541 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0224 01:24:50.343650   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHHostname
	I0224 01:24:50.346522   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:50.346984   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:50.347006   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:50.347191   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHPort
	I0224 01:24:50.347370   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:50.347597   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:50.347731   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHUsername
	I0224 01:24:50.347925   33541 main.go:141] libmachine: Using SSH client type: native
	I0224 01:24:50.348454   33541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0224 01:24:50.348466   33541 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0224 01:24:51.087401   33541 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0224 01:24:51.087415   33541 main.go:141] libmachine: Checking connection to Docker...
	I0224 01:24:51.087425   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetURL
	I0224 01:24:51.088669   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | Using libvirt version 6000000
	I0224 01:24:51.091014   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:51.091429   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:51.091453   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:51.091678   33541 main.go:141] libmachine: Docker is up and running!
	I0224 01:24:51.091690   33541 main.go:141] libmachine: Reticulating splines...
	I0224 01:24:51.091696   33541 client.go:171] LocalClient.Create took 28.315438502s
	I0224 01:24:51.091726   33541 start.go:167] duration metric: libmachine.API.Create for "NoKubernetes-394034" took 28.315490526s
	I0224 01:24:51.091733   33541 start.go:300] post-start starting for "NoKubernetes-394034" (driver="kvm2")
	I0224 01:24:51.091740   33541 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 01:24:51.091757   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .DriverName
	I0224 01:24:51.091996   33541 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 01:24:51.092023   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHHostname
	I0224 01:24:51.094384   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:51.094743   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:51.094762   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:51.094893   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHPort
	I0224 01:24:51.095091   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:51.095232   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHUsername
	I0224 01:24:51.095377   33541 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/NoKubernetes-394034/id_rsa Username:docker}
	I0224 01:24:51.190462   33541 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 01:24:51.195373   33541 info.go:137] Remote host: Buildroot 2021.02.12
	I0224 01:24:51.195388   33541 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-4074/.minikube/addons for local assets ...
	I0224 01:24:51.195447   33541 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-4074/.minikube/files for local assets ...
	I0224 01:24:51.195512   33541 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem -> 111312.pem in /etc/ssl/certs
	I0224 01:24:51.195582   33541 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 01:24:51.203334   33541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem --> /etc/ssl/certs/111312.pem (1708 bytes)
	I0224 01:24:51.232155   33541 start.go:303] post-start completed in 140.408799ms
	I0224 01:24:51.232193   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetConfigRaw
	I0224 01:24:51.232796   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetIP
	I0224 01:24:51.235822   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:51.236191   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:51.236214   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:51.236454   33541 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/config.json ...
	I0224 01:24:51.236620   33541 start.go:128] duration metric: createHost completed in 28.478153853s
	I0224 01:24:51.236635   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHHostname
	I0224 01:24:51.238989   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:51.239301   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:51.239323   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:51.239465   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHPort
	I0224 01:24:51.239681   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:51.239845   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:51.239985   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHUsername
	I0224 01:24:51.240140   33541 main.go:141] libmachine: Using SSH client type: native
	I0224 01:24:51.240512   33541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0224 01:24:51.240518   33541 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0224 01:24:51.354094   33541 main.go:141] libmachine: SSH cmd err, output: <nil>: 1677201891.343518188
	
	I0224 01:24:51.354106   33541 fix.go:207] guest clock: 1677201891.343518188
	I0224 01:24:51.354113   33541 fix.go:220] Guest: 2023-02-24 01:24:51.343518188 +0000 UTC Remote: 2023-02-24 01:24:51.236625205 +0000 UTC m=+28.585897458 (delta=106.892983ms)
	I0224 01:24:51.354127   33541 fix.go:191] guest clock delta is within tolerance: 106.892983ms
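
The guest-clock check above compares the VM's `date +%s.%N` output against the host's wall clock and accepts the drift when it stays under a fixed tolerance. A minimal sketch of that comparison in Go (the `withinTolerance` helper and the 2s threshold are assumptions for illustration; the real logic lives in fix.go):

	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance reports whether guest and host clocks differ by less
	// than tolerance; minikube resets the guest clock when they do not.
	func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta < tolerance
	}

	func main() {
		host := time.Unix(1677201891, 236625205)  // Remote: 01:24:51.236625205 UTC
		guest := time.Unix(1677201891, 343518188) // Guest:  01:24:51.343518188 UTC
		fmt.Println(guest.Sub(host))                              // 106.892983ms, matching the log
		fmt.Println(withinTolerance(guest, host, 2*time.Second)) // true
	}
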
	I0224 01:24:51.354130   33541 start.go:83] releasing machines lock for "NoKubernetes-394034", held for 28.595744098s
	I0224 01:24:51.354152   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .DriverName
	I0224 01:24:51.354418   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetIP
	I0224 01:24:51.357107   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:51.357439   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:51.357484   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:51.357597   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .DriverName
	I0224 01:24:51.358089   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .DriverName
	I0224 01:24:51.358275   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .DriverName
	I0224 01:24:51.358361   33541 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0224 01:24:51.358393   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHHostname
	I0224 01:24:51.358457   33541 ssh_runner.go:195] Run: cat /version.json
	I0224 01:24:51.358467   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHHostname
	I0224 01:24:51.360925   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:51.361237   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:51.361259   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:51.361330   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:51.361389   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHPort
	I0224 01:24:51.361558   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:51.361664   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHUsername
	I0224 01:24:51.361783   33541 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/NoKubernetes-394034/id_rsa Username:docker}
	I0224 01:24:51.361799   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:51.361823   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:51.361950   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHPort
	I0224 01:24:51.362074   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:51.362166   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHUsername
	I0224 01:24:51.362242   33541 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/NoKubernetes-394034/id_rsa Username:docker}
	I0224 01:24:51.466956   33541 ssh_runner.go:195] Run: systemctl --version
	I0224 01:24:51.472400   33541 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0224 01:24:51.477784   33541 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0224 01:24:51.477819   33541 ssh_runner.go:195] Run: which cri-dockerd
	I0224 01:24:51.481497   33541 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0224 01:24:51.491458   33541 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0224 01:24:51.507638   33541 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0224 01:24:51.524902   33541 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
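
The find/mv invocation above sidelines competing bridge and podman CNI configs by renaming them with a .mk_disabled suffix, which is what the "disabled [...] bridge cni config(s)" line reports. A rough Go equivalent of that rename pass (the matching is simplified; it would need root against a real /etc/cni/net.d):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableBridgeConfigs renames bridge/podman CNI configs in dir by
	// appending ".mk_disabled", mirroring the find/mv command above.
	func disableBridgeConfigs(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return nil, err
				}
				disabled = append(disabled, src)
			}
		}
		return disabled, nil
	}

	func main() {
		disabled, err := disableBridgeConfigs("/etc/cni/net.d")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("disabled %v bridge cni config(s)\n", disabled)
	}
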
	I0224 01:24:51.524913   33541 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 01:24:51.525001   33541 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 01:24:51.554887   33541 docker.go:630] Got preloaded images: 
	I0224 01:24:51.554901   33541 docker.go:636] registry.k8s.io/kube-apiserver:v1.26.1 wasn't preloaded
	I0224 01:24:51.554944   33541 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0224 01:24:51.564967   33541 ssh_runner.go:195] Run: which lz4
	I0224 01:24:51.568643   33541 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0224 01:24:51.572602   33541 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0224 01:24:51.572623   33541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (416334111 bytes)
	I0224 01:24:51.379791   33676 main.go:141] libmachine: (gvisor-694714) Calling .Start
	I0224 01:24:51.379930   33676 main.go:141] libmachine: (gvisor-694714) Ensuring networks are active...
	I0224 01:24:51.380621   33676 main.go:141] libmachine: (gvisor-694714) Ensuring network default is active
	I0224 01:24:51.380975   33676 main.go:141] libmachine: (gvisor-694714) Ensuring network mk-gvisor-694714 is active
	I0224 01:24:51.381306   33676 main.go:141] libmachine: (gvisor-694714) Getting domain xml...
	I0224 01:24:51.382028   33676 main.go:141] libmachine: (gvisor-694714) Creating domain...
	I0224 01:24:52.991884   33676 main.go:141] libmachine: (gvisor-694714) Waiting to get IP...
	I0224 01:24:52.992951   33676 main.go:141] libmachine: (gvisor-694714) DBG | domain gvisor-694714 has defined MAC address 52:54:00:ea:c0:87 in network mk-gvisor-694714
	I0224 01:24:52.993372   33676 main.go:141] libmachine: (gvisor-694714) DBG | unable to find current IP address of domain gvisor-694714 in network mk-gvisor-694714
	I0224 01:24:52.993497   33676 main.go:141] libmachine: (gvisor-694714) DBG | I0224 01:24:52.993356   33749 retry.go:31] will retry after 246.858858ms: waiting for machine to come up
	I0224 01:24:53.242127   33676 main.go:141] libmachine: (gvisor-694714) DBG | domain gvisor-694714 has defined MAC address 52:54:00:ea:c0:87 in network mk-gvisor-694714
	I0224 01:24:53.242684   33676 main.go:141] libmachine: (gvisor-694714) DBG | unable to find current IP address of domain gvisor-694714 in network mk-gvisor-694714
	I0224 01:24:53.242708   33676 main.go:141] libmachine: (gvisor-694714) DBG | I0224 01:24:53.242622   33749 retry.go:31] will retry after 317.124816ms: waiting for machine to come up
	I0224 01:24:53.561186   33676 main.go:141] libmachine: (gvisor-694714) DBG | domain gvisor-694714 has defined MAC address 52:54:00:ea:c0:87 in network mk-gvisor-694714
	I0224 01:24:53.561653   33676 main.go:141] libmachine: (gvisor-694714) DBG | unable to find current IP address of domain gvisor-694714 in network mk-gvisor-694714
	I0224 01:24:53.561690   33676 main.go:141] libmachine: (gvisor-694714) DBG | I0224 01:24:53.561625   33749 retry.go:31] will retry after 351.190965ms: waiting for machine to come up
	I0224 01:24:53.914119   33676 main.go:141] libmachine: (gvisor-694714) DBG | domain gvisor-694714 has defined MAC address 52:54:00:ea:c0:87 in network mk-gvisor-694714
	I0224 01:24:53.914752   33676 main.go:141] libmachine: (gvisor-694714) DBG | unable to find current IP address of domain gvisor-694714 in network mk-gvisor-694714
	I0224 01:24:53.914777   33676 main.go:141] libmachine: (gvisor-694714) DBG | I0224 01:24:53.914685   33749 retry.go:31] will retry after 404.394392ms: waiting for machine to come up
	I0224 01:24:54.320168   33676 main.go:141] libmachine: (gvisor-694714) DBG | domain gvisor-694714 has defined MAC address 52:54:00:ea:c0:87 in network mk-gvisor-694714
	I0224 01:24:54.320688   33676 main.go:141] libmachine: (gvisor-694714) DBG | unable to find current IP address of domain gvisor-694714 in network mk-gvisor-694714
	I0224 01:24:54.320711   33676 main.go:141] libmachine: (gvisor-694714) DBG | I0224 01:24:54.320647   33749 retry.go:31] will retry after 695.572509ms: waiting for machine to come up
	I0224 01:24:54.309888   33267 pod_ready.go:102] pod "etcd-pause-966618" in "kube-system" namespace has status "Ready":"False"
	I0224 01:24:56.309957   33267 pod_ready.go:102] pod "etcd-pause-966618" in "kube-system" namespace has status "Ready":"False"
	I0224 01:24:53.303974   33541 docker.go:594] Took 1.735368 seconds to copy over tarball
	I0224 01:24:53.304030   33541 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0224 01:24:56.117267   33541 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.81321656s)
	I0224 01:24:56.117281   33541 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0224 01:24:56.157910   33541 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0224 01:24:56.168959   33541 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2628 bytes)
	I0224 01:24:56.186568   33541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 01:24:56.292108   33541 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0224 01:24:55.018242   33676 main.go:141] libmachine: (gvisor-694714) DBG | domain gvisor-694714 has defined MAC address 52:54:00:ea:c0:87 in network mk-gvisor-694714
	I0224 01:24:55.018802   33676 main.go:141] libmachine: (gvisor-694714) DBG | unable to find current IP address of domain gvisor-694714 in network mk-gvisor-694714
	I0224 01:24:55.018827   33676 main.go:141] libmachine: (gvisor-694714) DBG | I0224 01:24:55.018728   33749 retry.go:31] will retry after 583.578143ms: waiting for machine to come up
	I0224 01:24:55.603573   33676 main.go:141] libmachine: (gvisor-694714) DBG | domain gvisor-694714 has defined MAC address 52:54:00:ea:c0:87 in network mk-gvisor-694714
	I0224 01:24:55.603994   33676 main.go:141] libmachine: (gvisor-694714) DBG | unable to find current IP address of domain gvisor-694714 in network mk-gvisor-694714
	I0224 01:24:55.604017   33676 main.go:141] libmachine: (gvisor-694714) DBG | I0224 01:24:55.603931   33749 retry.go:31] will retry after 1.080429432s: waiting for machine to come up
	I0224 01:24:56.685975   33676 main.go:141] libmachine: (gvisor-694714) DBG | domain gvisor-694714 has defined MAC address 52:54:00:ea:c0:87 in network mk-gvisor-694714
	I0224 01:24:56.686364   33676 main.go:141] libmachine: (gvisor-694714) DBG | unable to find current IP address of domain gvisor-694714 in network mk-gvisor-694714
	I0224 01:24:56.686387   33676 main.go:141] libmachine: (gvisor-694714) DBG | I0224 01:24:56.686319   33749 retry.go:31] will retry after 1.235678304s: waiting for machine to come up
	I0224 01:24:57.923073   33676 main.go:141] libmachine: (gvisor-694714) DBG | domain gvisor-694714 has defined MAC address 52:54:00:ea:c0:87 in network mk-gvisor-694714
	I0224 01:24:57.923491   33676 main.go:141] libmachine: (gvisor-694714) DBG | unable to find current IP address of domain gvisor-694714 in network mk-gvisor-694714
	I0224 01:24:57.923521   33676 main.go:141] libmachine: (gvisor-694714) DBG | I0224 01:24:57.923421   33749 retry.go:31] will retry after 1.86022193s: waiting for machine to come up
	I0224 01:24:58.784183   33267 pod_ready.go:102] pod "etcd-pause-966618" in "kube-system" namespace has status "Ready":"False"
	I0224 01:25:00.810267   33267 pod_ready.go:102] pod "etcd-pause-966618" in "kube-system" namespace has status "Ready":"False"
	I0224 01:24:59.443731   33541 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.151599008s)
	I0224 01:24:59.443757   33541 start.go:485] detecting cgroup driver to use...
	I0224 01:24:59.443855   33541 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 01:24:59.460466   33541 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0224 01:24:59.470976   33541 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0224 01:24:59.480789   33541 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0224 01:24:59.480833   33541 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0224 01:24:59.490754   33541 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 01:24:59.499566   33541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0224 01:24:59.508496   33541 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 01:24:59.517278   33541 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 01:24:59.526736   33541 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0224 01:24:59.535557   33541 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 01:24:59.544066   33541 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 01:24:59.552239   33541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 01:24:59.659255   33541 ssh_runner.go:195] Run: sudo systemctl restart containerd
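
Each sed above rewrites one key of /etc/containerd/config.toml before the daemon restart; forcing the runc shim onto the cgroupfs driver, for instance, means flipping SystemdCgroup to false. A sketch of that single substitution in Go (the sample TOML fragment is illustrative):

	package main

	import (
		"fmt"
		"regexp"
	)

	// setCgroupfs flips SystemdCgroup to false in a containerd config,
	// the same edit the sed command above performs in place.
	func setCgroupfs(config string) string {
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		return re.ReplaceAllString(config, "${1}SystemdCgroup = false")
	}

	func main() {
		in := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true`
		fmt.Println(setCgroupfs(in))
	}
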
	I0224 01:24:59.678215   33541 start.go:485] detecting cgroup driver to use...
	I0224 01:24:59.678279   33541 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0224 01:24:59.691790   33541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0224 01:24:59.703791   33541 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0224 01:24:59.719101   33541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0224 01:24:59.732066   33541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0224 01:24:59.742952   33541 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0224 01:24:59.770868   33541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0224 01:24:59.783768   33541 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 01:24:59.803437   33541 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0224 01:24:59.932318   33541 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0224 01:25:00.069995   33541 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0224 01:25:00.070035   33541 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0224 01:25:00.087933   33541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 01:25:00.189178   33541 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0224 01:25:01.551351   33541 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.362143972s)
	I0224 01:25:01.551412   33541 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 01:25:01.677931   33541 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0224 01:25:01.798231   33541 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 01:25:01.935746   33541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 01:25:02.046836   33541 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0224 01:25:02.067093   33541 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0224 01:25:02.067162   33541 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0224 01:25:02.074272   33541 start.go:553] Will wait 60s for crictl version
	I0224 01:25:02.074326   33541 ssh_runner.go:195] Run: which crictl
	I0224 01:25:02.078094   33541 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0224 01:25:02.208613   33541 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0224 01:25:02.208669   33541 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 01:25:02.248207   33541 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 01:25:02.309074   33267 pod_ready.go:92] pod "etcd-pause-966618" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:02.309106   33267 pod_ready.go:81] duration metric: took 14.522040109s waiting for pod "etcd-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:02.309120   33267 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:02.315314   33267 pod_ready.go:92] pod "kube-apiserver-pause-966618" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:02.315341   33267 pod_ready.go:81] duration metric: took 6.212713ms waiting for pod "kube-apiserver-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:02.315353   33267 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:02.319620   33267 pod_ready.go:92] pod "kube-controller-manager-pause-966618" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:02.319637   33267 pod_ready.go:81] duration metric: took 4.27604ms waiting for pod "kube-controller-manager-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:02.319647   33267 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7wlbf" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:02.326775   33267 pod_ready.go:92] pod "kube-proxy-7wlbf" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:02.326829   33267 pod_ready.go:81] duration metric: took 7.173792ms waiting for pod "kube-proxy-7wlbf" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:02.326852   33267 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:02.333414   33267 pod_ready.go:92] pod "kube-scheduler-pause-966618" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:02.333431   33267 pod_ready.go:81] duration metric: took 6.567454ms waiting for pod "kube-scheduler-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:02.333439   33267 pod_ready.go:38] duration metric: took 14.559067346s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
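
Each pod_ready.go wait above polls a pod until its Ready condition reports True. A simplified version of the condition check (the real code reads corev1.Pod objects via client-go; the struct here is pared down for illustration):

	package main

	import "fmt"

	// podCondition mirrors the corev1.PodCondition fields consulted by
	// pod_ready.go above (types simplified for this sketch).
	type podCondition struct {
		Type   string
		Status string
	}

	// isPodReady reports whether the "Ready" condition is "True".
	func isPodReady(conds []podCondition) bool {
		for _, c := range conds {
			if c.Type == "Ready" {
				return c.Status == "True"
			}
		}
		return false
	}

	func main() {
		conds := []podCondition{{Type: "PodScheduled", Status: "True"}, {Type: "Ready", Status: "True"}}
		fmt.Println(`pod "etcd-pause-966618" Ready:`, isPodReady(conds))
	}
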
	I0224 01:25:02.333460   33267 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0224 01:25:02.355296   33267 ops.go:34] apiserver oom_adj: -16
	I0224 01:25:02.355314   33267 kubeadm.go:637] restartCluster took 57.777507228s
	I0224 01:25:02.355326   33267 kubeadm.go:403] StartCluster complete in 57.85076012s
	I0224 01:25:02.355347   33267 settings.go:142] acquiring lock: {Name:mk174257a2297336a9e6f80080faa7ef819759a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 01:25:02.355426   33267 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15909-4074/kubeconfig
	I0224 01:25:02.356623   33267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-4074/kubeconfig: {Name:mk7a14c2c6ccf91ba70e9a5ad74574ac5676cf63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 01:25:02.356886   33267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0224 01:25:02.357022   33267 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0224 01:25:02.357120   33267 config.go:182] Loaded profile config "pause-966618": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 01:25:02.357180   33267 cache.go:107] acquiring lock: {Name:mk652b3b8459ff39d515b47d5e4228842d267921 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 01:25:02.359382   33267 out.go:177] * Enabled addons: 
	I0224 01:25:02.357243   33267 cache.go:115] /home/jenkins/minikube-integration/15909-4074/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I0224 01:25:02.357743   33267 kapi.go:59] client config for pause-966618: &rest.Config{Host:"https://192.168.50.59:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/profiles/pause-966618/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/profiles/pause-966618/client.key", CAFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string
(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 01:25:02.360819   33267 addons.go:492] enable addons completed in 3.794353ms: enabled=[]
	I0224 01:25:02.360843   33267 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/15909-4074/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 3.665817ms
	I0224 01:25:02.360860   33267 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/15909-4074/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I0224 01:25:02.360870   33267 cache.go:87] Successfully saved all images to host disk.
	I0224 01:25:02.361076   33267 config.go:182] Loaded profile config "pause-966618": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 01:25:02.361456   33267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:25:02.361507   33267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:25:02.364964   33267 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-966618" context rescaled to 1 replicas
	I0224 01:25:02.365000   33267 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.59 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0224 01:25:02.367473   33267 out.go:177] * Verifying Kubernetes components...
	I0224 01:25:02.296415   33541 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0224 01:25:02.296532   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetIP
	I0224 01:25:02.300095   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:25:02.300499   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:25:02.300525   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:25:02.300770   33541 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0224 01:25:02.306219   33541 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 01:25:02.322208   33541 localpath.go:92] copying /home/jenkins/minikube-integration/15909-4074/.minikube/client.crt -> /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/client.crt
	I0224 01:25:02.322350   33541 localpath.go:117] copying /home/jenkins/minikube-integration/15909-4074/.minikube/client.key -> /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/client.key
	I0224 01:25:02.322483   33541 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 01:25:02.322550   33541 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 01:25:02.352776   33541 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0224 01:25:02.352790   33541 docker.go:560] Images already preloaded, skipping extraction
	I0224 01:25:02.352852   33541 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 01:25:02.395175   33541 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0224 01:25:02.395187   33541 cache_images.go:84] Images are preloaded, skipping loading
	I0224 01:25:02.395226   33541 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0224 01:25:02.434875   33541 cni.go:84] Creating CNI manager for ""
	I0224 01:25:02.434889   33541 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0224 01:25:02.434905   33541 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0224 01:25:02.434919   33541 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.116 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:NoKubernetes-394034 NodeName:NoKubernetes-394034 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Static
PodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0224 01:25:02.435064   33541 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "NoKubernetes-394034"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0224 01:25:02.435148   33541 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=NoKubernetes-394034 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:NoKubernetes-394034 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0224 01:25:02.435187   33541 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0224 01:25:02.447004   33541 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 01:25:02.447063   33541 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0224 01:25:02.458141   33541 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (453 bytes)
	I0224 01:25:02.475923   33541 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 01:25:02.494749   33541 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0224 01:25:02.514183   33541 ssh_runner.go:195] Run: grep 192.168.61.116	control-plane.minikube.internal$ /etc/hosts
	I0224 01:25:02.518247   33541 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
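
The grep/echo pipeline above is an idempotent /etc/hosts update: strip any stale control-plane.minikube.internal line, then append the current mapping. The same transformation sketched in Go, operating on a string instead of the remote file:

	package main

	import (
		"fmt"
		"strings"
	)

	// setHostsEntry removes any existing line for name and appends
	// "ip<TAB>name", matching the shell pipeline above.
	func setHostsEntry(hosts, name, ip string) string {
		var out []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				out = append(out, line)
			}
		}
		out = append(out, ip+"\t"+name)
		return strings.Join(out, "\n") + "\n"
	}

	func main() {
		hosts := "127.0.0.1\tlocalhost\n192.168.61.1\thost.minikube.internal\n"
		fmt.Print(setHostsEntry(hosts, "control-plane.minikube.internal", "192.168.61.116"))
	}
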
	I0224 01:25:02.533651   33541 certs.go:56] Setting up /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034 for IP: 192.168.61.116
	I0224 01:25:02.533674   33541 certs.go:186] acquiring lock for shared ca certs: {Name:mk0c9037d1d3974a6bc5ba375ef4804966dba284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 01:25:02.533844   33541 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.key
	I0224 01:25:02.533899   33541 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15909-4074/.minikube/proxy-client-ca.key
	I0224 01:25:02.534008   33541 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/client.key
	I0224 01:25:02.534030   33541 certs.go:315] generating minikube signed cert: /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/apiserver.key.6062d48b
	I0224 01:25:02.534042   33541 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/apiserver.crt.6062d48b with IP's: [192.168.61.116 10.96.0.1 127.0.0.1 10.0.0.1]
	I0224 01:25:02.639355   33541 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/apiserver.crt.6062d48b ...
	I0224 01:25:02.639374   33541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/apiserver.crt.6062d48b: {Name:mk204b8f0fff62839ad3dfce86026dcc237148df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 01:25:02.639590   33541 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/apiserver.key.6062d48b ...
	I0224 01:25:02.639599   33541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/apiserver.key.6062d48b: {Name:mkabcccd3f286ea677f26b21126891a56772d043 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 01:25:02.639734   33541 certs.go:333] copying /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/apiserver.crt.6062d48b -> /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/apiserver.crt
	I0224 01:25:02.639849   33541 certs.go:337] copying /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/apiserver.key.6062d48b -> /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/apiserver.key
	I0224 01:25:02.639930   33541 certs.go:315] generating aggregator signed cert: /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/proxy-client.key
	I0224 01:25:02.639949   33541 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/proxy-client.crt with IP's: []
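
The apiserver certificate generated above carries the node, service, and loopback IPs as SANs. A self-contained sketch with crypto/x509 (self-signed here to stay short; minikube actually signs these certs with the shared minikubeCA key pair, and the subject and lifetime below are assumptions):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The same IP SANs the log shows for apiserver.crt.6062d48b.
			IPAddresses: []net.IP{
				net.ParseIP("192.168.61.116"), net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
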
	I0224 01:24:59.786519   33676 main.go:141] libmachine: (gvisor-694714) DBG | domain gvisor-694714 has defined MAC address 52:54:00:ea:c0:87 in network mk-gvisor-694714
	I0224 01:24:59.787027   33676 main.go:141] libmachine: (gvisor-694714) DBG | unable to find current IP address of domain gvisor-694714 in network mk-gvisor-694714
	I0224 01:24:59.787043   33676 main.go:141] libmachine: (gvisor-694714) DBG | I0224 01:24:59.786977   33749 retry.go:31] will retry after 1.638205996s: waiting for machine to come up
	I0224 01:25:01.426608   33676 main.go:141] libmachine: (gvisor-694714) DBG | domain gvisor-694714 has defined MAC address 52:54:00:ea:c0:87 in network mk-gvisor-694714
	I0224 01:25:01.427220   33676 main.go:141] libmachine: (gvisor-694714) DBG | unable to find current IP address of domain gvisor-694714 in network mk-gvisor-694714
	I0224 01:25:01.427244   33676 main.go:141] libmachine: (gvisor-694714) DBG | I0224 01:25:01.427100   33749 retry.go:31] will retry after 2.195506564s: waiting for machine to come up
	I0224 01:25:03.624340   33676 main.go:141] libmachine: (gvisor-694714) DBG | domain gvisor-694714 has defined MAC address 52:54:00:ea:c0:87 in network mk-gvisor-694714
	I0224 01:25:03.624858   33676 main.go:141] libmachine: (gvisor-694714) DBG | unable to find current IP address of domain gvisor-694714 in network mk-gvisor-694714
	I0224 01:25:03.624880   33676 main.go:141] libmachine: (gvisor-694714) DBG | I0224 01:25:03.624801   33749 retry.go:31] will retry after 3.478404418s: waiting for machine to come up
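
The retry.go lines above wait for the gvisor VM's DHCP lease with delays that grow between attempts. A generic retry loop in that spirit (the doubling-with-jitter schedule is an assumption; the log shows minikube's actual delays):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry calls fn with a randomized, growing delay until it succeeds
	// or attempts run out, like the "will retry after" waits above.
	func retry(attempts int, initial time.Duration, fn func() error) error {
		delay := initial
		for i := 0; i < attempts; i++ {
			if err := fn(); err == nil {
				return nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay *= 2
		}
		return errors.New("machine never came up")
	}

	func main() {
		calls := 0
		_ = retry(10, 250*time.Millisecond, func() error {
			calls++
			if calls < 4 {
				return errors.New("unable to find current IP address")
			}
			return nil
		})
		fmt.Println("machine is up after", calls, "attempts")
	}
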
	I0224 01:25:02.368794   33267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 01:25:02.382200   33267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37331
	I0224 01:25:02.382731   33267 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:25:02.383282   33267 main.go:141] libmachine: Using API Version  1
	I0224 01:25:02.383305   33267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:25:02.383650   33267 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:25:02.383801   33267 main.go:141] libmachine: (pause-966618) Calling .GetState
	I0224 01:25:02.386097   33267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:25:02.386152   33267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:25:02.407245   33267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42479
	I0224 01:25:02.409582   33267 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:25:02.410204   33267 main.go:141] libmachine: Using API Version  1
	I0224 01:25:02.410229   33267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:25:02.410659   33267 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:25:02.410858   33267 main.go:141] libmachine: (pause-966618) Calling .DriverName
	I0224 01:25:02.411069   33267 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 01:25:02.411100   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHHostname
	I0224 01:25:02.415890   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:25:02.415936   33267 main.go:141] libmachine: (pause-966618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:4c:50", ip: ""} in network mk-pause-966618: {Iface:virbr2 ExpiryTime:2023-02-24 02:22:50 +0000 UTC Type:0 Mac:52:54:00:6b:4c:50 Iaid: IPaddr:192.168.50.59 Prefix:24 Hostname:pause-966618 Clientid:01:52:54:00:6b:4c:50}
	I0224 01:25:02.415959   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined IP address 192.168.50.59 and MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:25:02.416047   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHPort
	I0224 01:25:02.416230   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHKeyPath
	I0224 01:25:02.416416   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHUsername
	I0224 01:25:02.416697   33267 sshutil.go:53] new ssh client: &{IP:192.168.50.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/pause-966618/id_rsa Username:docker}
	I0224 01:25:02.581908   33267 node_ready.go:35] waiting up to 6m0s for node "pause-966618" to be "Ready" ...
	I0224 01:25:02.582172   33267 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0224 01:25:02.585550   33267 node_ready.go:49] node "pause-966618" has status "Ready":"True"
	I0224 01:25:02.585574   33267 node_ready.go:38] duration metric: took 3.634014ms waiting for node "pause-966618" to be "Ready" ...
	I0224 01:25:02.585585   33267 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 01:25:02.601775   33267 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0224 01:25:02.601800   33267 cache_images.go:84] Images are preloaded, skipping loading
	I0224 01:25:02.601807   33267 cache_images.go:262] succeeded pushing to: pause-966618
	I0224 01:25:02.601812   33267 cache_images.go:263] failed pushing to: 
	I0224 01:25:02.601832   33267 main.go:141] libmachine: Making call to close driver server
	I0224 01:25:02.601843   33267 main.go:141] libmachine: (pause-966618) Calling .Close
	I0224 01:25:02.602141   33267 main.go:141] libmachine: Successfully made call to close driver server
	I0224 01:25:02.602162   33267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 01:25:02.602171   33267 main.go:141] libmachine: Making call to close driver server
	I0224 01:25:02.602180   33267 main.go:141] libmachine: (pause-966618) Calling .Close
	I0224 01:25:02.602914   33267 main.go:141] libmachine: Successfully made call to close driver server
	I0224 01:25:02.602930   33267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 01:25:02.711320   33267 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-5kk6f" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:03.117011   33267 pod_ready.go:92] pod "coredns-787d4945fb-5kk6f" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:03.117050   33267 pod_ready.go:81] duration metric: took 405.70699ms waiting for pod "coredns-787d4945fb-5kk6f" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:03.117065   33267 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:03.508668   33267 pod_ready.go:92] pod "etcd-pause-966618" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:03.508692   33267 pod_ready.go:81] duration metric: took 391.619335ms waiting for pod "etcd-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:03.508705   33267 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:03.907114   33267 pod_ready.go:92] pod "kube-apiserver-pause-966618" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:03.907137   33267 pod_ready.go:81] duration metric: took 398.424421ms waiting for pod "kube-apiserver-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:03.907149   33267 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:04.306213   33267 pod_ready.go:92] pod "kube-controller-manager-pause-966618" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:04.306238   33267 pod_ready.go:81] duration metric: took 399.079189ms waiting for pod "kube-controller-manager-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:04.306250   33267 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7wlbf" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:04.706570   33267 pod_ready.go:92] pod "kube-proxy-7wlbf" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:04.706595   33267 pod_ready.go:81] duration metric: took 400.337777ms waiting for pod "kube-proxy-7wlbf" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:04.706608   33267 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:05.107315   33267 pod_ready.go:92] pod "kube-scheduler-pause-966618" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:05.107347   33267 pod_ready.go:81] duration metric: took 400.730489ms waiting for pod "kube-scheduler-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:05.107364   33267 pod_ready.go:38] duration metric: took 2.521766254s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 01:25:05.107389   33267 api_server.go:51] waiting for apiserver process to appear ...
	I0224 01:25:05.107436   33267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 01:25:05.128123   33267 api_server.go:71] duration metric: took 2.763089316s to wait for apiserver process to appear ...
	I0224 01:25:05.128163   33267 api_server.go:87] waiting for apiserver healthz status ...
	I0224 01:25:05.128177   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:25:05.135652   33267 api_server.go:278] https://192.168.50.59:8443/healthz returned 200:
	ok
	I0224 01:25:05.136519   33267 api_server.go:140] control plane version: v1.26.1
	I0224 01:25:05.136537   33267 api_server.go:130] duration metric: took 8.36697ms to wait for apiserver health ...
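
The healthz probe above is a plain HTTPS GET against the apiserver that succeeds once it returns 200 with an "ok" body. A minimal polling client (InsecureSkipVerify keeps the sketch self-contained; minikube authenticates with the cluster's CA and client certificates instead):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it
	// returns 200 or the deadline passes.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.59:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
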
	I0224 01:25:05.136546   33267 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 01:25:05.309967   33267 system_pods.go:59] 6 kube-system pods found
	I0224 01:25:05.310003   33267 system_pods.go:61] "coredns-787d4945fb-5kk6f" [864031ca-0190-46a6-9191-bed0ab15761f] Running
	I0224 01:25:05.310011   33267 system_pods.go:61] "etcd-pause-966618" [cd22a134-0381-429c-b9b1-9cc9c2130730] Running
	I0224 01:25:05.310018   33267 system_pods.go:61] "kube-apiserver-pause-966618" [152ed33c-514e-4289-a994-58e7d466b19d] Running
	I0224 01:25:05.310024   33267 system_pods.go:61] "kube-controller-manager-pause-966618" [e87845de-aa91-4c77-9ece-00268d888b81] Running
	I0224 01:25:05.310030   33267 system_pods.go:61] "kube-proxy-7wlbf" [98036b9d-4a03-4d42-9f71-28b8df888be5] Running
	I0224 01:25:05.310038   33267 system_pods.go:61] "kube-scheduler-pause-966618" [c6381154-d98c-4778-886c-6390c12c324e] Running
	I0224 01:25:05.310045   33267 system_pods.go:74] duration metric: took 173.493465ms to wait for pod list to return data ...
	I0224 01:25:05.310057   33267 default_sa.go:34] waiting for default service account to be created ...
	I0224 01:25:05.506994   33267 default_sa.go:45] found service account: "default"
	I0224 01:25:05.507025   33267 default_sa.go:55] duration metric: took 196.958905ms for default service account to be created ...
	I0224 01:25:05.507048   33267 system_pods.go:116] waiting for k8s-apps to be running ...
	I0224 01:25:05.709913   33267 system_pods.go:86] 6 kube-system pods found
	I0224 01:25:05.709939   33267 system_pods.go:89] "coredns-787d4945fb-5kk6f" [864031ca-0190-46a6-9191-bed0ab15761f] Running
	I0224 01:25:05.709946   33267 system_pods.go:89] "etcd-pause-966618" [cd22a134-0381-429c-b9b1-9cc9c2130730] Running
	I0224 01:25:05.709953   33267 system_pods.go:89] "kube-apiserver-pause-966618" [152ed33c-514e-4289-a994-58e7d466b19d] Running
	I0224 01:25:05.709960   33267 system_pods.go:89] "kube-controller-manager-pause-966618" [e87845de-aa91-4c77-9ece-00268d888b81] Running
	I0224 01:25:05.709966   33267 system_pods.go:89] "kube-proxy-7wlbf" [98036b9d-4a03-4d42-9f71-28b8df888be5] Running
	I0224 01:25:05.709972   33267 system_pods.go:89] "kube-scheduler-pause-966618" [c6381154-d98c-4778-886c-6390c12c324e] Running
	I0224 01:25:05.709981   33267 system_pods.go:126] duration metric: took 202.927244ms to wait for k8s-apps to be running ...
	I0224 01:25:05.709993   33267 system_svc.go:44] waiting for kubelet service to be running ....
	I0224 01:25:05.710044   33267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 01:25:05.724049   33267 system_svc.go:56] duration metric: took 14.047173ms WaitForService to wait for kubelet.
	I0224 01:25:05.724072   33267 kubeadm.go:578] duration metric: took 3.359049103s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0224 01:25:05.724093   33267 node_conditions.go:102] verifying NodePressure condition ...
	I0224 01:25:05.908529   33267 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0224 01:25:05.908554   33267 node_conditions.go:123] node cpu capacity is 2
	I0224 01:25:05.908564   33267 node_conditions.go:105] duration metric: took 184.464952ms to run NodePressure ...
	I0224 01:25:05.908574   33267 start.go:228] waiting for startup goroutines ...
	I0224 01:25:05.908580   33267 start.go:233] waiting for cluster config update ...
	I0224 01:25:05.908587   33267 start.go:242] writing updated cluster config ...
	I0224 01:25:05.908913   33267 ssh_runner.go:195] Run: rm -f paused
	I0224 01:25:05.962958   33267 start.go:555] kubectl: 1.26.1, cluster: 1.26.1 (minor skew: 0)
	I0224 01:25:05.966906   33267 out.go:177] * Done! kubectl is now configured to use "pause-966618" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Fri 2023-02-24 01:22:46 UTC, ends at Fri 2023-02-24 01:25:06 UTC. --
	Feb 24 01:24:39 pause-966618 dockerd[4255]: time="2023-02-24T01:24:39.047002565Z" level=warning msg="cleanup warnings time=\"2023-02-24T01:24:39Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6660 runtime=io.containerd.runc.v2\n"
	Feb 24 01:24:41 pause-966618 dockerd[4255]: time="2023-02-24T01:24:41.598084033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 24 01:24:41 pause-966618 dockerd[4255]: time="2023-02-24T01:24:41.598197087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 24 01:24:41 pause-966618 dockerd[4255]: time="2023-02-24T01:24:41.598219760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 24 01:24:41 pause-966618 dockerd[4255]: time="2023-02-24T01:24:41.598341706Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/409f89a75e66e33968519196a807970098f00f95c635c62db6ebd81afa67ade8 pid=6909 runtime=io.containerd.runc.v2
	Feb 24 01:24:41 pause-966618 dockerd[4255]: time="2023-02-24T01:24:41.612333627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 24 01:24:41 pause-966618 dockerd[4255]: time="2023-02-24T01:24:41.612401915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 24 01:24:41 pause-966618 dockerd[4255]: time="2023-02-24T01:24:41.612413121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 24 01:24:41 pause-966618 dockerd[4255]: time="2023-02-24T01:24:41.612664644Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/4aabdcdde9cf7192024b2d9eb3bc6f2abcb4a59ef91285c8e6682b73e6cc8431 pid=6936 runtime=io.containerd.runc.v2
	Feb 24 01:24:41 pause-966618 dockerd[4255]: time="2023-02-24T01:24:41.616476185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 24 01:24:41 pause-966618 dockerd[4255]: time="2023-02-24T01:24:41.616746230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 24 01:24:41 pause-966618 dockerd[4255]: time="2023-02-24T01:24:41.616900206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 24 01:24:41 pause-966618 dockerd[4255]: time="2023-02-24T01:24:41.619110954Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/201f4222b6ec37ff5bee9fc1edbf2ef9ca1b1a7350a98b9c64e62107eaff46ac pid=6939 runtime=io.containerd.runc.v2
	Feb 24 01:24:47 pause-966618 dockerd[4255]: time="2023-02-24T01:24:47.248316467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 24 01:24:47 pause-966618 dockerd[4255]: time="2023-02-24T01:24:47.248446314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 24 01:24:47 pause-966618 dockerd[4255]: time="2023-02-24T01:24:47.248455765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 24 01:24:47 pause-966618 dockerd[4255]: time="2023-02-24T01:24:47.249046552Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/72c6739efe2b9161a1c69abbcab9fc1b036dbeffdf34fa80ad551d801f4490eb pid=7129 runtime=io.containerd.runc.v2
	Feb 24 01:24:47 pause-966618 dockerd[4255]: time="2023-02-24T01:24:47.543980021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 24 01:24:47 pause-966618 dockerd[4255]: time="2023-02-24T01:24:47.544198182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 24 01:24:47 pause-966618 dockerd[4255]: time="2023-02-24T01:24:47.544359528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 24 01:24:47 pause-966618 dockerd[4255]: time="2023-02-24T01:24:47.544813474Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ec57ffc07efea5c80ab62903046ca28a77fad1b2473a6794d0858b9c45b9188f pid=7177 runtime=io.containerd.runc.v2
	Feb 24 01:24:48 pause-966618 dockerd[4255]: time="2023-02-24T01:24:48.119108515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 24 01:24:48 pause-966618 dockerd[4255]: time="2023-02-24T01:24:48.119252738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 24 01:24:48 pause-966618 dockerd[4255]: time="2023-02-24T01:24:48.119282884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 24 01:24:48 pause-966618 dockerd[4255]: time="2023-02-24T01:24:48.119840619Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8338976709e5eb7d94bc2e0aadeff1333eb52459e09462452d1b3489ff4fecdb pid=7330 runtime=io.containerd.runc.v2
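	(The "starting signal loop" entries above are containerd shims coming up for the restarted containers; the long hex segment under io.containerd.runtime.v2.task/moby/ is the full container ID. A quick way to map those IDs back to containers on the node, sketched with this run's profile name:
	  minikube -p pause-966618 ssh -- docker ps -a --no-trunc --format '{{.ID}} {{.Names}} {{.Status}}'
	The untruncated IDs printed by docker ps match the moby task paths in the journal.)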
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	8338976709e5e       5185b96f0becf       18 seconds ago       Running             coredns                   2                   72c6739efe2b9
	ec57ffc07efea       46a6bb3c77ce0       19 seconds ago       Running             kube-proxy                2                   4e083861b8719
	4aabdcdde9cf7       655493523f607       25 seconds ago       Running             kube-scheduler            3                   573e5c961fdb1
	201f4222b6ec3       e9c08e11b07f6       25 seconds ago       Running             kube-controller-manager   3                   d8187f234e578
	409f89a75e66e       fce326961ae2d       25 seconds ago       Running             etcd                      3                   6bd59d955c73d
	7fd1d7e9bc798       deb04688c4a35       30 seconds ago       Running             kube-apiserver            2                   b62e8b9c65056
	4fbc20ae5bf7c       fce326961ae2d       45 seconds ago       Exited              etcd                      2                   dfb4d0df0765f
	ad958c9f692bf       e9c08e11b07f6       45 seconds ago       Exited              kube-controller-manager   2                   d6f702db3bf6b
	e8da938203fc1       655493523f607       48 seconds ago       Exited              kube-scheduler            2                   eebafd5971ced
	426996151fbc3       46a6bb3c77ce0       57 seconds ago       Exited              kube-proxy                1                   29450e6f7ba68
	fac1cbcee4196       5185b96f0becf       About a minute ago   Exited              coredns                   1                   4ce5be8f6eb9d
	f3b13c7b26554       deb04688c4a35       About a minute ago   Exited              kube-apiserver            1                   de777fb1d6b37
	
	* 
	* ==> coredns [8338976709e5] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:37882 - 55295 "HINFO IN 7038850563798999389.5597014617150123683. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019122525s
	
	* 
	* ==> coredns [fac1cbcee419] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:52393 - 59882 "HINFO IN 7982410992457572593.3493053231117998011. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021473294s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:44406->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
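	(This superseded CoreDNS instance never reached the API server: the TLS handshake timeout and connection-refused errors against 10.96.0.1:443 line up with the apiserver restart window, and the SIGTERM is the old pod being torn down. API reachability from inside the cluster can be spot-checked with a throwaway pod; the pod name and image here are illustrative:
	  kubectl --context pause-966618 run api-probe --rm -i --restart=Never \
	    --image=curlimages/curl -- -ksS https://10.96.0.1:443/version
	A JSON version payload confirms the service VIP is routable again.)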
	
	* 
	* ==> describe nodes <==
	* Name:               pause-966618
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-966618
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c13299ce0b45f38f7f45d3bc31124c3ea59c0510
	                    minikube.k8s.io/name=pause-966618
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_24T01_23_27_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 24 Feb 2023 01:23:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-966618
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 24 Feb 2023 01:25:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 24 Feb 2023 01:24:45 +0000   Fri, 24 Feb 2023 01:23:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 24 Feb 2023 01:24:45 +0000   Fri, 24 Feb 2023 01:23:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 24 Feb 2023 01:24:45 +0000   Fri, 24 Feb 2023 01:23:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 24 Feb 2023 01:24:45 +0000   Fri, 24 Feb 2023 01:23:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.59
	  Hostname:    pause-966618
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a7569a111714f25b93fea6b50ae934a
	  System UUID:                6a7569a1-1171-4f25-b93f-ea6b50ae934a
	  Boot ID:                    d4ac2b4a-5dc3-4f7c-8fb2-1c7afd60407d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-787d4945fb-5kk6f                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     87s
	  kube-system                 etcd-pause-966618                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         99s
	  kube-system                 kube-apiserver-pause-966618             250m (12%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-pause-966618    200m (10%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-7wlbf                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-pause-966618             100m (5%)     0 (0%)      0 (0%)           0 (0%)         99s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 85s                kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  NodeAllocatableEnforced  99s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  99s                kubelet          Node pause-966618 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s                kubelet          Node pause-966618 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s                kubelet          Node pause-966618 status is now: NodeHasSufficientPID
	  Normal  Starting                 99s                kubelet          Starting kubelet.
	  Normal  NodeReady                98s                kubelet          Node pause-966618 status is now: NodeReady
	  Normal  RegisteredNode           88s                node-controller  Node pause-966618 event: Registered Node pause-966618 in Controller
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)  kubelet          Node pause-966618 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)  kubelet          Node pause-966618 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)  kubelet          Node pause-966618 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                 node-controller  Node pause-966618 event: Registered Node pause-966618 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.389175] systemd-fstab-generator[898]: Ignoring "noauto" for root device
	[  +0.297463] systemd-fstab-generator[933]: Ignoring "noauto" for root device
	[  +0.111266] systemd-fstab-generator[944]: Ignoring "noauto" for root device
	[  +0.130662] systemd-fstab-generator[957]: Ignoring "noauto" for root device
	[  +1.531108] systemd-fstab-generator[1105]: Ignoring "noauto" for root device
	[  +0.123090] systemd-fstab-generator[1116]: Ignoring "noauto" for root device
	[  +0.147358] systemd-fstab-generator[1127]: Ignoring "noauto" for root device
	[  +0.128731] systemd-fstab-generator[1138]: Ignoring "noauto" for root device
	[  +4.266481] systemd-fstab-generator[1388]: Ignoring "noauto" for root device
	[  +0.438156] kauditd_printk_skb: 68 callbacks suppressed
	[ +13.834461] systemd-fstab-generator[2130]: Ignoring "noauto" for root device
	[ +14.083667] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.636362] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.758826] systemd-fstab-generator[3528]: Ignoring "noauto" for root device
	[  +0.300612] systemd-fstab-generator[3567]: Ignoring "noauto" for root device
	[  +0.165547] systemd-fstab-generator[3578]: Ignoring "noauto" for root device
	[  +0.163297] systemd-fstab-generator[3591]: Ignoring "noauto" for root device
	[  +5.255153] kauditd_printk_skb: 2 callbacks suppressed
	[Feb24 01:24] systemd-fstab-generator[4603]: Ignoring "noauto" for root device
	[  +0.143453] systemd-fstab-generator[4646]: Ignoring "noauto" for root device
	[  +0.172343] systemd-fstab-generator[4689]: Ignoring "noauto" for root device
	[  +0.161512] systemd-fstab-generator[4710]: Ignoring "noauto" for root device
	[  +2.283866] kauditd_printk_skb: 38 callbacks suppressed
	[ +23.051873] kauditd_printk_skb: 7 callbacks suppressed
	[ +12.853107] systemd-fstab-generator[6743]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [409f89a75e66] <==
	* {"level":"warn","ts":"2023-02-24T01:24:58.779Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"396.147992ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2023-02-24T01:24:58.780Z","caller":"traceutil/trace.go:171","msg":"trace[2068455327] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:448; }","duration":"397.749335ms","start":"2023-02-24T01:24:58.383Z","end":"2023-02-24T01:24:58.780Z","steps":["trace[2068455327] 'agreement among raft nodes before linearized reading'  (duration: 396.111217ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-24T01:24:58.781Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"311.900642ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2023-02-24T01:24:58.781Z","caller":"traceutil/trace.go:171","msg":"trace[172381515] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:448; }","duration":"315.004084ms","start":"2023-02-24T01:24:58.466Z","end":"2023-02-24T01:24:58.781Z","steps":["trace[172381515] 'agreement among raft nodes before linearized reading'  (duration: 311.832922ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-24T01:24:58.781Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-24T01:24:58.466Z","time spent":"315.044393ms","remote":"127.0.0.1:49808","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":1,"response size":236,"request content":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" "}
	{"level":"warn","ts":"2023-02-24T01:24:58.782Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-24T01:24:58.383Z","time spent":"398.051633ms","remote":"127.0.0.1:49808","response type":"/etcdserverpb.KV/Range","request count":0,"request size":61,"response count":1,"response size":229,"request content":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" "}
	{"level":"info","ts":"2023-02-24T01:24:59.095Z","caller":"traceutil/trace.go:171","msg":"trace[924256197] linearizableReadLoop","detail":"{readStateIndex:488; appliedIndex:487; }","duration":"306.110554ms","start":"2023-02-24T01:24:58.789Z","end":"2023-02-24T01:24:59.095Z","steps":["trace[924256197] 'read index received'  (duration: 223.1288ms)","trace[924256197] 'applied index is now lower than readState.Index'  (duration: 82.981039ms)"],"step_count":2}
	{"level":"info","ts":"2023-02-24T01:24:59.095Z","caller":"traceutil/trace.go:171","msg":"trace[351778279] transaction","detail":"{read_only:false; response_revision:449; number_of_response:1; }","duration":"308.37778ms","start":"2023-02-24T01:24:58.787Z","end":"2023-02-24T01:24:59.095Z","steps":["trace[351778279] 'process raft request'  (duration: 225.367671ms)","trace[351778279] 'compare'  (duration: 82.48645ms)"],"step_count":2}
	{"level":"warn","ts":"2023-02-24T01:24:59.096Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-24T01:24:58.787Z","time spent":"308.44389ms","remote":"127.0.0.1:49806","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6210,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-966618\" mod_revision:447 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-966618\" value_size:6139 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-966618\" > >"}
	{"level":"warn","ts":"2023-02-24T01:24:59.096Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"306.35986ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2023-02-24T01:24:59.096Z","caller":"traceutil/trace.go:171","msg":"trace[530318559] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:449; }","duration":"306.400764ms","start":"2023-02-24T01:24:58.789Z","end":"2023-02-24T01:24:59.096Z","steps":["trace[530318559] 'agreement among raft nodes before linearized reading'  (duration: 306.297962ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-24T01:24:59.096Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-24T01:24:58.789Z","time spent":"306.435241ms","remote":"127.0.0.1:49808","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":1,"response size":236,"request content":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" "}
	{"level":"info","ts":"2023-02-24T01:24:59.218Z","caller":"traceutil/trace.go:171","msg":"trace[1957925680] transaction","detail":"{read_only:false; response_revision:451; number_of_response:1; }","duration":"115.525478ms","start":"2023-02-24T01:24:59.103Z","end":"2023-02-24T01:24:59.218Z","steps":["trace[1957925680] 'process raft request'  (duration: 115.468773ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-24T01:24:59.219Z","caller":"traceutil/trace.go:171","msg":"trace[777955538] transaction","detail":"{read_only:false; response_revision:450; number_of_response:1; }","duration":"428.473795ms","start":"2023-02-24T01:24:58.791Z","end":"2023-02-24T01:24:59.219Z","steps":["trace[777955538] 'process raft request'  (duration: 422.506505ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-24T01:24:59.220Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-24T01:24:58.791Z","time spent":"428.574339ms","remote":"127.0.0.1:49802","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":785,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:359 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:728 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >"}
	{"level":"info","ts":"2023-02-24T01:24:59.220Z","caller":"traceutil/trace.go:171","msg":"trace[1611141453] linearizableReadLoop","detail":"{readStateIndex:489; appliedIndex:488; }","duration":"124.607146ms","start":"2023-02-24T01:24:59.095Z","end":"2023-02-24T01:24:59.220Z","steps":["trace[1611141453] 'read index received'  (duration: 117.686257ms)","trace[1611141453] 'applied index is now lower than readState.Index'  (duration: 6.919668ms)"],"step_count":2}
	{"level":"warn","ts":"2023-02-24T01:24:59.221Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"430.380087ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2023-02-24T01:24:59.221Z","caller":"traceutil/trace.go:171","msg":"trace[1153661980] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:451; }","duration":"430.670167ms","start":"2023-02-24T01:24:58.790Z","end":"2023-02-24T01:24:59.221Z","steps":["trace[1153661980] 'agreement among raft nodes before linearized reading'  (duration: 429.639975ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-24T01:24:59.221Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-24T01:24:58.790Z","time spent":"430.712799ms","remote":"127.0.0.1:49808","response type":"/etcdserverpb.KV/Range","request count":0,"request size":61,"response count":1,"response size":229,"request content":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" "}
	{"level":"warn","ts":"2023-02-24T01:24:59.222Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"333.565172ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4011"}
	{"level":"info","ts":"2023-02-24T01:24:59.226Z","caller":"traceutil/trace.go:171","msg":"trace[1371051409] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:451; }","duration":"338.29311ms","start":"2023-02-24T01:24:58.888Z","end":"2023-02-24T01:24:59.226Z","steps":["trace[1371051409] 'agreement among raft nodes before linearized reading'  (duration: 333.512656ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-24T01:24:59.227Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-24T01:24:58.888Z","time spent":"338.449423ms","remote":"127.0.0.1:49878","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":1,"response size":4033,"request content":"key:\"/registry/deployments/kube-system/coredns\" "}
	{"level":"warn","ts":"2023-02-24T01:24:59.222Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"421.117462ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-966618\" ","response":"range_response_count:1 size:5470"}
	{"level":"info","ts":"2023-02-24T01:24:59.227Z","caller":"traceutil/trace.go:171","msg":"trace[1267808777] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-966618; range_end:; response_count:1; response_revision:451; }","duration":"426.410685ms","start":"2023-02-24T01:24:58.801Z","end":"2023-02-24T01:24:59.227Z","steps":["trace[1267808777] 'agreement among raft nodes before linearized reading'  (duration: 421.051096ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-24T01:24:59.227Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-24T01:24:58.801Z","time spent":"426.593622ms","remote":"127.0.0.1:49806","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":5492,"request content":"key:\"/registry/pods/kube-system/etcd-pause-966618\" "}
	
	* 
	* ==> etcd [4fbc20ae5bf7] <==
	* {"level":"info","ts":"2023-02-24T01:24:22.398Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-02-24T01:24:22.399Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"602de89049e69a5d","initial-advertise-peer-urls":["https://192.168.50.59:2380"],"listen-peer-urls":["https://192.168.50.59:2380"],"advertise-client-urls":["https://192.168.50.59:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.59:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-02-24T01:24:22.399Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-24T01:24:22.399Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.50.59:2380"}
	{"level":"info","ts":"2023-02-24T01:24:22.399Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.50.59:2380"}
	{"level":"info","ts":"2023-02-24T01:24:22.477Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602de89049e69a5d is starting a new election at term 3"}
	{"level":"info","ts":"2023-02-24T01:24:22.478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602de89049e69a5d became pre-candidate at term 3"}
	{"level":"info","ts":"2023-02-24T01:24:22.478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602de89049e69a5d received MsgPreVoteResp from 602de89049e69a5d at term 3"}
	{"level":"info","ts":"2023-02-24T01:24:22.478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602de89049e69a5d became candidate at term 4"}
	{"level":"info","ts":"2023-02-24T01:24:22.478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602de89049e69a5d received MsgVoteResp from 602de89049e69a5d at term 4"}
	{"level":"info","ts":"2023-02-24T01:24:22.478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602de89049e69a5d became leader at term 4"}
	{"level":"info","ts":"2023-02-24T01:24:22.478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 602de89049e69a5d elected leader 602de89049e69a5d at term 4"}
	{"level":"info","ts":"2023-02-24T01:24:22.484Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"602de89049e69a5d","local-member-attributes":"{Name:pause-966618 ClientURLs:[https://192.168.50.59:2379]}","request-path":"/0/members/602de89049e69a5d/attributes","cluster-id":"47ab5ca4b9a8bf42","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-24T01:24:22.484Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-24T01:24:22.486Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.50.59:2379"}
	{"level":"info","ts":"2023-02-24T01:24:22.486Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-24T01:24:22.488Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-24T01:24:22.498Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-24T01:24:22.498Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-24T01:24:33.984Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-02-24T01:24:33.984Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-966618","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.59:2380"],"advertise-client-urls":["https://192.168.50.59:2379"]}
	{"level":"info","ts":"2023-02-24T01:24:33.988Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"602de89049e69a5d","current-leader-member-id":"602de89049e69a5d"}
	{"level":"info","ts":"2023-02-24T01:24:33.991Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.50.59:2380"}
	{"level":"info","ts":"2023-02-24T01:24:33.993Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.50.59:2380"}
	{"level":"info","ts":"2023-02-24T01:24:33.993Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-966618","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.59:2380"],"advertise-client-urls":["https://192.168.50.59:2379"]}
	
	* 
	* ==> kernel <==
	*  01:25:07 up 2 min,  0 users,  load average: 1.28, 0.66, 0.26
	Linux pause-966618 5.10.57 #1 SMP Thu Feb 16 22:09:52 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [7fd1d7e9bc79] <==
	* I0224 01:24:45.277420       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0224 01:24:45.277438       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0224 01:24:45.277452       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0224 01:24:45.282226       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0224 01:24:45.282272       1 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
	I0224 01:24:45.393051       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0224 01:24:45.406590       1 controller.go:159] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0224 01:24:45.431607       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0224 01:24:45.462851       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0224 01:24:45.464389       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0224 01:24:45.464884       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0224 01:24:45.465041       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0224 01:24:45.477336       1 shared_informer.go:280] Caches are synced for configmaps
	I0224 01:24:45.477430       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0224 01:24:45.482373       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0224 01:24:45.487609       1 cache.go:39] Caches are synced for autoregister controller
	I0224 01:24:46.017922       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0224 01:24:46.283818       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0224 01:24:47.060948       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0224 01:24:47.075700       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0224 01:24:47.148726       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0224 01:24:47.186879       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0224 01:24:47.205326       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0224 01:24:58.789355       1 controller.go:615] quota admission added evaluator for: endpoints
	I0224 01:24:59.102212       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [f3b13c7b2655] <==
	* W0224 01:24:16.941998       1 logging.go:59] [core] [Channel #3 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 01:24:17.122908       1 logging.go:59] [core] [Channel #4 SubChannel #5] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 01:24:20.774303       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	E0224 01:24:25.948672       1 run.go:74] "command failed" err="context deadline exceeded"
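	(This is the superseded apiserver [f3b13c7b2655] dying during the restart window: etcd on 127.0.0.1:2379 was down, so every gRPC subchannel saw connection refused until the process gave up with "context deadline exceeded". The replacement [7fd1d7e9bc79] starts cleanly once etcd is serving again, which the aggregated readiness endpoint confirms:
	  kubectl --context pause-966618 get --raw /readyz
	An "ok" response means all readiness checks, including the etcd check, pass.)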
	
	* 
	* ==> kube-controller-manager [201f4222b6ec] <==
	* I0224 01:24:58.335249       1 shared_informer.go:280] Caches are synced for ReplicationController
	I0224 01:24:58.335299       1 shared_informer.go:280] Caches are synced for cronjob
	I0224 01:24:58.335334       1 shared_informer.go:280] Caches are synced for ephemeral
	I0224 01:24:58.336478       1 shared_informer.go:280] Caches are synced for disruption
	I0224 01:24:58.340321       1 shared_informer.go:280] Caches are synced for PV protection
	I0224 01:24:58.345378       1 shared_informer.go:280] Caches are synced for stateful set
	I0224 01:24:58.349859       1 shared_informer.go:280] Caches are synced for node
	I0224 01:24:58.349914       1 range_allocator.go:167] Sending events to api server.
	I0224 01:24:58.349929       1 range_allocator.go:171] Starting range CIDR allocator
	I0224 01:24:58.349933       1 shared_informer.go:273] Waiting for caches to sync for cidrallocator
	I0224 01:24:58.349940       1 shared_informer.go:280] Caches are synced for cidrallocator
	I0224 01:24:58.351214       1 shared_informer.go:280] Caches are synced for namespace
	I0224 01:24:58.351291       1 shared_informer.go:280] Caches are synced for certificate-csrapproving
	I0224 01:24:58.354562       1 shared_informer.go:280] Caches are synced for service account
	I0224 01:24:58.358699       1 shared_informer.go:280] Caches are synced for job
	I0224 01:24:58.358967       1 shared_informer.go:280] Caches are synced for TTL after finished
	I0224 01:24:58.369950       1 shared_informer.go:280] Caches are synced for deployment
	I0224 01:24:58.385823       1 shared_informer.go:280] Caches are synced for HPA
	I0224 01:24:58.428974       1 shared_informer.go:280] Caches are synced for endpoint_slice_mirroring
	I0224 01:24:58.440564       1 shared_informer.go:280] Caches are synced for resource quota
	I0224 01:24:58.463937       1 shared_informer.go:280] Caches are synced for endpoint_slice
	I0224 01:24:58.510160       1 shared_informer.go:280] Caches are synced for resource quota
	I0224 01:24:58.884872       1 shared_informer.go:280] Caches are synced for garbage collector
	I0224 01:24:58.886233       1 shared_informer.go:280] Caches are synced for garbage collector
	I0224 01:24:58.886394       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-controller-manager [ad958c9f692b] <==
	* I0224 01:24:22.903296       1 serving.go:348] Generated self-signed cert in-memory
	I0224 01:24:23.288434       1 controllermanager.go:182] Version: v1.26.1
	I0224 01:24:23.288645       1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 01:24:23.289776       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0224 01:24:23.289998       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0224 01:24:23.290033       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0224 01:24:23.290299       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-proxy [426996151fbc] <==
	* E0224 01:24:19.236934       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-966618": net/http: TLS handshake timeout
	E0224 01:24:26.955373       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-966618": dial tcp 192.168.50.59:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.50.59:52556->192.168.50.59:8443: read: connection reset by peer
	E0224 01:24:29.025061       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-966618": dial tcp 192.168.50.59:8443: connect: connection refused
	E0224 01:24:33.214153       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-966618": dial tcp 192.168.50.59:8443: connect: connection refused
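	(Each failure mode here tracks a phase of the apiserver restart: TLS handshake timeout while it was wedged, then connection refused once it was down; the successor [ec57ffc07efe] retrieves the node info at 01:24:47. The same endpoint kube-proxy polls can be probed from the VM, using the node IP from the describe output since control-plane.minikube.internal only resolves inside the guest, and assuming curl is present in the Buildroot image:
	  minikube -p pause-966618 ssh -- curl -ksS -o /dev/null -w '%{http_code}' \
	    https://192.168.50.59:8443/api/v1/nodes/pause-966618
	Any HTTP status, even a 401 without credentials, shows the listener is back.)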
	
	* 
	* ==> kube-proxy [ec57ffc07efe] <==
	* I0224 01:24:47.701077       1 node.go:163] Successfully retrieved node IP: 192.168.50.59
	I0224 01:24:47.701185       1 server_others.go:109] "Detected node IP" address="192.168.50.59"
	I0224 01:24:47.701244       1 server_others.go:535] "Using iptables proxy"
	I0224 01:24:47.754176       1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0224 01:24:47.754223       1 server_others.go:176] "Using iptables Proxier"
	I0224 01:24:47.754265       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0224 01:24:47.754668       1 server.go:655] "Version info" version="v1.26.1"
	I0224 01:24:47.754706       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 01:24:47.755405       1 config.go:317] "Starting service config controller"
	I0224 01:24:47.755464       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0224 01:24:47.755578       1 config.go:226] "Starting endpoint slice config controller"
	I0224 01:24:47.755611       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0224 01:24:47.756163       1 config.go:444] "Starting node config controller"
	I0224 01:24:47.756200       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0224 01:24:47.855698       1 shared_informer.go:280] Caches are synced for service config
	I0224 01:24:47.856103       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0224 01:24:47.857614       1 shared_informer.go:280] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [4aabdcdde9cf] <==
	* I0224 01:24:42.721262       1 serving.go:348] Generated self-signed cert in-memory
	W0224 01:24:45.285027       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0224 01:24:45.285102       1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0224 01:24:45.285116       1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0224 01:24:45.285127       1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0224 01:24:45.372919       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.1"
	I0224 01:24:45.372973       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 01:24:45.380469       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0224 01:24:45.381072       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0224 01:24:45.381829       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0224 01:24:45.384475       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0224 01:24:45.483024       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [e8da938203fc] <==
	* W0224 01:24:30.995321       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.50.59:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	E0224 01:24:30.995404       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.50.59:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	W0224 01:24:31.056334       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.50.59:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	E0224 01:24:31.056430       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.50.59:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	W0224 01:24:31.136416       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.50.59:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	E0224 01:24:31.136565       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.50.59:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	W0224 01:24:31.334273       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.50.59:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	E0224 01:24:31.334372       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.50.59:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	W0224 01:24:31.335819       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.50.59:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	E0224 01:24:31.335924       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.50.59:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	W0224 01:24:31.367835       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.50.59:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	E0224 01:24:31.367912       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.50.59:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	W0224 01:24:31.417773       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.50.59:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	E0224 01:24:31.417833       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.50.59:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	W0224 01:24:31.556026       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.50.59:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	E0224 01:24:31.556401       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.50.59:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	W0224 01:24:33.420873       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.50.59:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	E0224 01:24:33.420987       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.50.59:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	W0224 01:24:33.849309       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.50.59:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	E0224 01:24:33.849403       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.50.59:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	I0224 01:24:33.988126       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E0224 01:24:33.988445       1 shared_informer.go:276] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0224 01:24:33.988551       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0224 01:24:33.988748       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0224 01:24:33.988912       1 run.go:74] "command failed" err="finished without leader elect"
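	("finished without leader elect" only means this scheduler instance was terminated before it ever acquired the leader lease; its replacement [4aabdcdde9cf] serves normally. The current holder is visible on the coordination lease:
	  kubectl --context pause-966618 -n kube-system get lease kube-scheduler \
	    -o jsonpath='{.spec.holderIdentity}{"\n"}')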
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Fri 2023-02-24 01:22:46 UTC, ends at Fri 2023-02-24 01:25:07 UTC. --
	Feb 24 01:24:41 pause-966618 kubelet[6749]: I0224 01:24:41.459367    6749 scope.go:115] "RemoveContainer" containerID="4fbc20ae5bf7c11474da8994a88ff02eebe65f7997961567f7acd8a88f6b70e4"
	Feb 24 01:24:41 pause-966618 kubelet[6749]: I0224 01:24:41.483233    6749 scope.go:115] "RemoveContainer" containerID="ad958c9f692bfe0b733272d6b661b68c16ddf38e0d88c0bbaeac5d37efb4e6be"
	Feb 24 01:24:41 pause-966618 kubelet[6749]: I0224 01:24:41.494680    6749 scope.go:115] "RemoveContainer" containerID="e8da938203fc176f8393b15585f7d1a66c7833ee712276046b0abc360284ce20"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: E0224 01:24:45.455274    6749 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-pause-966618\" already exists" pod="kube-system/kube-scheduler-pause-966618"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.477059    6749 kubelet_node_status.go:108] "Node was previously registered" node="pause-966618"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.477795    6749 kubelet_node_status.go:73] "Successfully registered node" node="pause-966618"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.488588    6749 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.489897    6749 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.882635    6749 apiserver.go:52] "Watching apiserver"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.885364    6749 topology_manager.go:210] "Topology Admit Handler"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.885462    6749 topology_manager.go:210] "Topology Admit Handler"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.915998    6749 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.953848    6749 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98036b9d-4a03-4d42-9f71-28b8df888be5-xtables-lock\") pod \"kube-proxy-7wlbf\" (UID: \"98036b9d-4a03-4d42-9f71-28b8df888be5\") " pod="kube-system/kube-proxy-7wlbf"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.954076    6749 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzpqk\" (UniqueName: \"kubernetes.io/projected/864031ca-0190-46a6-9191-bed0ab15761f-kube-api-access-bzpqk\") pod \"coredns-787d4945fb-5kk6f\" (UID: \"864031ca-0190-46a6-9191-bed0ab15761f\") " pod="kube-system/coredns-787d4945fb-5kk6f"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.954251    6749 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/864031ca-0190-46a6-9191-bed0ab15761f-config-volume\") pod \"coredns-787d4945fb-5kk6f\" (UID: \"864031ca-0190-46a6-9191-bed0ab15761f\") " pod="kube-system/coredns-787d4945fb-5kk6f"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.954425    6749 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98036b9d-4a03-4d42-9f71-28b8df888be5-lib-modules\") pod \"kube-proxy-7wlbf\" (UID: \"98036b9d-4a03-4d42-9f71-28b8df888be5\") " pod="kube-system/kube-proxy-7wlbf"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.954675    6749 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrvzk\" (UniqueName: \"kubernetes.io/projected/98036b9d-4a03-4d42-9f71-28b8df888be5-kube-api-access-xrvzk\") pod \"kube-proxy-7wlbf\" (UID: \"98036b9d-4a03-4d42-9f71-28b8df888be5\") " pod="kube-system/kube-proxy-7wlbf"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.954776    6749 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/98036b9d-4a03-4d42-9f71-28b8df888be5-kube-proxy\") pod \"kube-proxy-7wlbf\" (UID: \"98036b9d-4a03-4d42-9f71-28b8df888be5\") " pod="kube-system/kube-proxy-7wlbf"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.954830    6749 reconciler.go:41] "Reconciler: start to sync state"
	Feb 24 01:24:47 pause-966618 kubelet[6749]: I0224 01:24:47.189353    6749 request.go:690] Waited for 1.131451949s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token
	Feb 24 01:24:47 pause-966618 kubelet[6749]: I0224 01:24:47.386628    6749 scope.go:115] "RemoveContainer" containerID="426996151fbc3fd63eb00bba1849a621ea6f53c215166768e47eada4bbf6243f"
	Feb 24 01:24:47 pause-966618 kubelet[6749]: I0224 01:24:47.937765    6749 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72c6739efe2b9161a1c69abbcab9fc1b036dbeffdf34fa80ad551d801f4490eb"
	Feb 24 01:24:49 pause-966618 kubelet[6749]: I0224 01:24:49.986140    6749 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Feb 24 01:24:50 pause-966618 kubelet[6749]: I0224 01:24:50.995334    6749 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Feb 24 01:24:53 pause-966618 kubelet[6749]: I0224 01:24:53.800830    6749 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
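	(The kubelet log shows a clean recovery: stale containers removed, node re-registered, volumes reconciled. The "client-side throttling" wait is kubelet's own rate limiter, as the message itself notes, not API priority-and-fairness. When the 25-line window captured above is too short, the node's journal keeps the full history:
	  minikube -p pause-966618 ssh -- sudo journalctl -u kubelet --no-pager -n 200)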
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-966618 -n pause-966618
helpers_test.go:261: (dbg) Run:  kubectl --context pause-966618 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
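(The framework now opens a second post-mortem against the same profile. The log capture it performs can be reproduced by hand on a live profile; --file is optional and writes the dump to disk instead of stdout:
  out/minikube-linux-amd64 -p pause-966618 logs -n 25 --file=pause-966618.log)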
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-966618 -n pause-966618
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-966618 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-966618 logs -n 25: (1.019710411s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                      Args                      |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-707178                   | kubernetes-upgrade-707178 | jenkins | v1.29.0 | 24 Feb 23 01:20 UTC |                     |
	|         | --memory=2200                                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-707178                   | kubernetes-upgrade-707178 | jenkins | v1.29.0 | 24 Feb 23 01:20 UTC | 24 Feb 23 01:21 UTC |
	|         | --memory=2200                                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-676643                      | running-upgrade-676643    | jenkins | v1.29.0 | 24 Feb 23 01:20 UTC | 24 Feb 23 01:20 UTC |
	| start   | -p force-systemd-env-457559                    | force-systemd-env-457559  | jenkins | v1.29.0 | 24 Feb 23 01:20 UTC | 24 Feb 23 01:22 UTC |
	|         | --memory=2048                                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-273695                      | stopped-upgrade-273695    | jenkins | v1.29.0 | 24 Feb 23 01:21 UTC | 24 Feb 23 01:21 UTC |
	| start   | -p cert-expiration-843515                      | cert-expiration-843515    | jenkins | v1.29.0 | 24 Feb 23 01:21 UTC | 24 Feb 23 01:22 UTC |
	|         | --memory=2048                                  |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                           |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| cache   | gvisor-694714 cache add                        | gvisor-694714             | jenkins | v1.29.0 | 24 Feb 23 01:21 UTC | 24 Feb 23 01:21 UTC |
	|         | gcr.io/k8s-minikube/gvisor-addon:2             |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-707178                   | kubernetes-upgrade-707178 | jenkins | v1.29.0 | 24 Feb 23 01:21 UTC | 24 Feb 23 01:21 UTC |
	| start   | -p docker-flags-724706                         | docker-flags-724706       | jenkins | v1.29.0 | 24 Feb 23 01:21 UTC | 24 Feb 23 01:23 UTC |
	|         | --cache-images=false                           |                           |         |         |                     |                     |
	|         | --memory=2048                                  |                           |         |         |                     |                     |
	|         | --install-addons=false                         |                           |         |         |                     |                     |
	|         | --wait=false                                   |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                           |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                           |                           |         |         |                     |                     |
	|         | --docker-opt=debug                             |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| addons  | gvisor-694714 addons enable                    | gvisor-694714             | jenkins | v1.29.0 | 24 Feb 23 01:21 UTC | 24 Feb 23 01:22 UTC |
	|         | gvisor                                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-457559                       | force-systemd-env-457559  | jenkins | v1.29.0 | 24 Feb 23 01:22 UTC | 24 Feb 23 01:22 UTC |
	|         | ssh docker info --format                       |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-457559                    | force-systemd-env-457559  | jenkins | v1.29.0 | 24 Feb 23 01:22 UTC | 24 Feb 23 01:22 UTC |
	| start   | -p pause-966618 --memory=2048                  | pause-966618              | jenkins | v1.29.0 | 24 Feb 23 01:22 UTC | 24 Feb 23 01:23 UTC |
	|         | --install-addons=false                         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                       |                           |         |         |                     |                     |
	| stop    | -p gvisor-694714                               | gvisor-694714             | jenkins | v1.29.0 | 24 Feb 23 01:23 UTC | 24 Feb 23 01:24 UTC |
	| ssh     | docker-flags-724706 ssh                        | docker-flags-724706       | jenkins | v1.29.0 | 24 Feb 23 01:23 UTC | 24 Feb 23 01:23 UTC |
	|         | sudo systemctl show docker                     |                           |         |         |                     |                     |
	|         | --property=Environment                         |                           |         |         |                     |                     |
	|         | --no-pager                                     |                           |         |         |                     |                     |
	| ssh     | docker-flags-724706 ssh                        | docker-flags-724706       | jenkins | v1.29.0 | 24 Feb 23 01:23 UTC | 24 Feb 23 01:23 UTC |
	|         | sudo systemctl show docker                     |                           |         |         |                     |                     |
	|         | --property=ExecStart                           |                           |         |         |                     |                     |
	|         | --no-pager                                     |                           |         |         |                     |                     |
	| delete  | -p docker-flags-724706                         | docker-flags-724706       | jenkins | v1.29.0 | 24 Feb 23 01:23 UTC | 24 Feb 23 01:23 UTC |
	| start   | -p cert-options-398057                         | cert-options-398057       | jenkins | v1.29.0 | 24 Feb 23 01:23 UTC | 24 Feb 23 01:24 UTC |
	|         | --memory=2048                                  |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                      |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                  |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                    |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com               |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                          |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| start   | -p pause-966618                                | pause-966618              | jenkins | v1.29.0 | 24 Feb 23 01:23 UTC | 24 Feb 23 01:25 UTC |
	|         | --alsologtostderr -v=1                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| ssh     | cert-options-398057 ssh                        | cert-options-398057       | jenkins | v1.29.0 | 24 Feb 23 01:24 UTC | 24 Feb 23 01:24 UTC |
	|         | openssl x509 -text -noout -in                  |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt          |                           |         |         |                     |                     |
	| ssh     | -p cert-options-398057 -- sudo                 | cert-options-398057       | jenkins | v1.29.0 | 24 Feb 23 01:24 UTC | 24 Feb 23 01:24 UTC |
	|         | cat /etc/kubernetes/admin.conf                 |                           |         |         |                     |                     |
	| delete  | -p cert-options-398057                         | cert-options-398057       | jenkins | v1.29.0 | 24 Feb 23 01:24 UTC | 24 Feb 23 01:24 UTC |
	| start   | -p NoKubernetes-394034                         | NoKubernetes-394034       | jenkins | v1.29.0 | 24 Feb 23 01:24 UTC |                     |
	|         | --no-kubernetes                                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20                      |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-394034                         | NoKubernetes-394034       | jenkins | v1.29.0 | 24 Feb 23 01:24 UTC |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| start   | -p gvisor-694714 --memory=2200                 | gvisor-694714             | jenkins | v1.29.0 | 24 Feb 23 01:24 UTC |                     |
	|         | --container-runtime=containerd --docker-opt    |                           |         |         |                     |                     |
	|         | containerd=/var/run/containerd/containerd.sock |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	|---------|------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/24 01:24:34
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.20.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 01:24:34.466247   33676 out.go:296] Setting OutFile to fd 1 ...
	I0224 01:24:34.466376   33676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 01:24:34.466380   33676 out.go:309] Setting ErrFile to fd 2...
	I0224 01:24:34.466385   33676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 01:24:34.466525   33676 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-4074/.minikube/bin
	I0224 01:24:34.467084   33676 out.go:303] Setting JSON to false
	I0224 01:24:34.467946   33676 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":4024,"bootTime":1677197851,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 01:24:34.467993   33676 start.go:135] virtualization: kvm guest
	I0224 01:24:34.470329   33676 out.go:177] * [gvisor-694714] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0224 01:24:34.471671   33676 out.go:177]   - MINIKUBE_LOCATION=15909
	I0224 01:24:34.471671   33676 notify.go:220] Checking for updates...
	I0224 01:24:34.473831   33676 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 01:24:34.475158   33676 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15909-4074/kubeconfig
	I0224 01:24:34.476343   33676 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-4074/.minikube
	I0224 01:24:34.477579   33676 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0224 01:24:34.478854   33676 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 01:24:34.480493   33676 config.go:182] Loaded profile config "gvisor-694714": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.26.1
	I0224 01:24:34.481000   33676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:24:34.481065   33676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:24:34.495604   33676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36961
	I0224 01:24:34.496309   33676 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:24:34.496902   33676 main.go:141] libmachine: Using API Version  1
	I0224 01:24:34.496919   33676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:24:34.497435   33676 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:24:34.497678   33676 main.go:141] libmachine: (gvisor-694714) Calling .DriverName
	I0224 01:24:34.497879   33676 driver.go:365] Setting default libvirt URI to qemu:///system
	I0224 01:24:34.498279   33676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:24:34.498308   33676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:24:34.512385   33676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33707
	I0224 01:24:34.512720   33676 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:24:34.513148   33676 main.go:141] libmachine: Using API Version  1
	I0224 01:24:34.513167   33676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:24:34.513424   33676 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:24:34.513601   33676 main.go:141] libmachine: (gvisor-694714) Calling .DriverName
	I0224 01:24:34.547588   33676 out.go:177] * Using the kvm2 driver based on existing profile
	I0224 01:24:34.548732   33676 start.go:296] selected driver: kvm2
	I0224 01:24:34.548736   33676 start.go:857] validating driver "kvm2" against &{Name:gvisor-694714 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[containerd=/var/run/containerd/containerd.sock] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:gvisor-694714 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true gvisor:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 01:24:34.548841   33676 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 01:24:34.549376   33676 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 01:24:34.549437   33676 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15909-4074/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0224 01:24:34.562930   33676 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.29.0
	I0224 01:24:34.563200   33676 cni.go:84] Creating CNI manager for ""
	I0224 01:24:34.563212   33676 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0224 01:24:34.563219   33676 start_flags.go:319] config:
	{Name:gvisor-694714 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[containerd=/var/run/containerd/containerd.sock] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:gvisor-694714 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true gvisor:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 01:24:34.563314   33676 iso.go:125] acquiring lock: {Name:mkc3d6185dc03bdb5dc9fb9cd39dd085e0eef640 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 01:24:34.564989   33676 out.go:177] * Starting control plane node gvisor-694714 in cluster gvisor-694714
	I0224 01:24:32.120974   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:32.121579   33267 api_server.go:268] stopped: https://192.168.50.59:8443/healthz: Get "https://192.168.50.59:8443/healthz": dial tcp 192.168.50.59:8443: connect: connection refused
	I0224 01:24:32.121616   33267 retry.go:31] will retry after 1.661532498s: state is "Stopped"
	I0224 01:24:33.784375   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:33.785016   33267 api_server.go:268] stopped: https://192.168.50.59:8443/healthz: Get "https://192.168.50.59:8443/healthz": dial tcp 192.168.50.59:8443: connect: connection refused
	I0224 01:24:33.785071   33267 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0224 01:24:33.785080   33267 kubeadm.go:1120] stopping kube-system containers ...
	I0224 01:24:33.785148   33267 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0224 01:24:33.815898   33267 docker.go:456] Stopping containers: [4fbc20ae5bf7 ad958c9f692b e8da938203fc 426996151fbc 29450e6f7ba6 fac1cbcee419 4ce5be8f6eb9 f3b13c7b2655 de777fb1d6b3 eebafd5971ce dfb4d0df0765 d6f702db3bf6 4f657627f345 99c7798b80b0 228dc753c532 512bb7879a7d 280d511aab5e c0eb86ebedf9 6e49e0ea3699 063bb6925a9d 56e99b7105e1 37f4903be34d]
	I0224 01:24:33.815973   33267 ssh_runner.go:195] Run: docker stop 4fbc20ae5bf7 ad958c9f692b e8da938203fc 426996151fbc 29450e6f7ba6 fac1cbcee419 4ce5be8f6eb9 f3b13c7b2655 de777fb1d6b3 eebafd5971ce dfb4d0df0765 d6f702db3bf6 4f657627f345 99c7798b80b0 228dc753c532 512bb7879a7d 280d511aab5e c0eb86ebedf9 6e49e0ea3699 063bb6925a9d 56e99b7105e1 37f4903be34d
	I0224 01:24:32.744696   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:32.745177   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | unable to find current IP address of domain NoKubernetes-394034 in network mk-NoKubernetes-394034
	I0224 01:24:32.745192   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | I0224 01:24:32.745145   33563 retry.go:31] will retry after 2.669961808s: waiting for machine to come up
	I0224 01:24:35.418769   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:35.419173   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | unable to find current IP address of domain NoKubernetes-394034 in network mk-NoKubernetes-394034
	I0224 01:24:35.419196   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | I0224 01:24:35.419125   33563 retry.go:31] will retry after 3.056903471s: waiting for machine to come up
	I0224 01:24:34.566141   33676 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime containerd
	I0224 01:24:34.566195   33676 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-containerd-overlay2-amd64.tar.lz4
	I0224 01:24:34.566210   33676 cache.go:57] Caching tarball of preloaded images
	I0224 01:24:34.566308   33676 preload.go:174] Found /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0224 01:24:34.566316   33676 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on containerd
	I0224 01:24:34.566468   33676 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/gvisor-694714/config.json ...
	I0224 01:24:34.566679   33676 cache.go:193] Successfully downloaded all kic artifacts
	I0224 01:24:34.566710   33676 start.go:364] acquiring machines lock for gvisor-694714: {Name:mk99c679472abf655c2223ea7db4ce727d2ab6ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0224 01:24:39.082069   33267 ssh_runner.go:235] Completed: docker stop 4fbc20ae5bf7 ad958c9f692b e8da938203fc 426996151fbc 29450e6f7ba6 fac1cbcee419 4ce5be8f6eb9 f3b13c7b2655 de777fb1d6b3 eebafd5971ce dfb4d0df0765 d6f702db3bf6 4f657627f345 99c7798b80b0 228dc753c532 512bb7879a7d 280d511aab5e c0eb86ebedf9 6e49e0ea3699 063bb6925a9d 56e99b7105e1 37f4903be34d: (5.266065396s)
	I0224 01:24:39.082132   33267 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0224 01:24:39.126516   33267 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 01:24:39.136579   33267 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Feb 24 01:23 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Feb 24 01:23 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Feb 24 01:23 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Feb 24 01:23 /etc/kubernetes/scheduler.conf
	
	I0224 01:24:39.136646   33267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0224 01:24:39.145075   33267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0224 01:24:39.153171   33267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0224 01:24:39.161688   33267 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0224 01:24:39.161744   33267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0224 01:24:39.169623   33267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0224 01:24:39.177326   33267 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0224 01:24:39.177391   33267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0224 01:24:39.185537   33267 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 01:24:39.194208   33267 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0224 01:24:39.194230   33267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 01:24:39.320624   33267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 01:24:40.517893   33267 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.197229724s)
	I0224 01:24:40.517927   33267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0224 01:24:40.735504   33267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 01:24:40.833361   33267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0224 01:24:40.950707   33267 api_server.go:51] waiting for apiserver process to appear ...
	I0224 01:24:40.950773   33267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 01:24:40.964974   33267 api_server.go:71] duration metric: took 14.2683ms to wait for apiserver process to appear ...
	I0224 01:24:40.965001   33267 api_server.go:87] waiting for apiserver healthz status ...
	I0224 01:24:40.965013   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:38.477199   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:38.477593   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | unable to find current IP address of domain NoKubernetes-394034 in network mk-NoKubernetes-394034
	I0224 01:24:38.477616   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | I0224 01:24:38.477544   33563 retry.go:31] will retry after 3.07394937s: waiting for machine to come up
	I0224 01:24:41.554657   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:41.555172   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | unable to find current IP address of domain NoKubernetes-394034 in network mk-NoKubernetes-394034
	I0224 01:24:41.555188   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | I0224 01:24:41.555138   33563 retry.go:31] will retry after 4.525311684s: waiting for machine to come up
	I0224 01:24:45.376650   33267 api_server.go:278] https://192.168.50.59:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0224 01:24:45.376682   33267 api_server.go:102] status: https://192.168.50.59:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0224 01:24:45.876934   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:45.882491   33267 api_server.go:278] https://192.168.50.59:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0224 01:24:45.882516   33267 api_server.go:102] status: https://192.168.50.59:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 01:24:46.376861   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:46.382503   33267 api_server.go:278] https://192.168.50.59:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0224 01:24:46.382529   33267 api_server.go:102] status: https://192.168.50.59:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 01:24:46.877091   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:24:46.882591   33267 api_server.go:278] https://192.168.50.59:8443/healthz returned 200:
	ok
	I0224 01:24:46.891863   33267 api_server.go:140] control plane version: v1.26.1
	I0224 01:24:46.891882   33267 api_server.go:130] duration metric: took 5.926875892s to wait for apiserver health ...
	I0224 01:24:46.891891   33267 cni.go:84] Creating CNI manager for ""
	I0224 01:24:46.891900   33267 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0224 01:24:46.893673   33267 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0224 01:24:46.894663   33267 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0224 01:24:46.905495   33267 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0224 01:24:46.084008   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:46.084556   33541 main.go:141] libmachine: (NoKubernetes-394034) Found IP for machine: 192.168.61.116
	I0224 01:24:46.084573   33541 main.go:141] libmachine: (NoKubernetes-394034) Reserving static IP address...
	I0224 01:24:46.084588   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has current primary IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:46.085009   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | unable to find host DHCP lease matching {name: "NoKubernetes-394034", mac: "52:54:00:9f:50:8f", ip: "192.168.61.116"} in network mk-NoKubernetes-394034
	I0224 01:24:46.160376   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | Getting to WaitForSSH function...
	I0224 01:24:46.160398   33541 main.go:141] libmachine: (NoKubernetes-394034) Reserved static IP address: 192.168.61.116
	I0224 01:24:46.160411   33541 main.go:141] libmachine: (NoKubernetes-394034) Waiting for SSH to be available...
	I0224 01:24:46.163070   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:46.163328   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034
	I0224 01:24:46.163344   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | unable to find defined IP address of network mk-NoKubernetes-394034 interface with MAC address 52:54:00:9f:50:8f
	I0224 01:24:46.163567   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | Using SSH client type: external
	I0224 01:24:46.163589   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | Using SSH private key: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/NoKubernetes-394034/id_rsa (-rw-------)
	I0224 01:24:46.163615   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15909-4074/.minikube/machines/NoKubernetes-394034/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0224 01:24:46.163625   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | About to run SSH command:
	I0224 01:24:46.163635   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | exit 0
	I0224 01:24:46.167456   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | SSH cmd err, output: exit status 255: 
	I0224 01:24:46.167472   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0224 01:24:46.167483   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | command : exit 0
	I0224 01:24:46.167498   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | err     : exit status 255
	I0224 01:24:46.167510   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | output  : 
	I0224 01:24:51.354223   33676 start.go:368] acquired machines lock for "gvisor-694714" in 16.787477726s
	I0224 01:24:51.354257   33676 start.go:96] Skipping create...Using existing machine configuration
	I0224 01:24:51.354262   33676 fix.go:55] fixHost starting: 
	I0224 01:24:51.354651   33676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:24:51.354692   33676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:24:51.373159   33676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46257
	I0224 01:24:51.373629   33676 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:24:51.374176   33676 main.go:141] libmachine: Using API Version  1
	I0224 01:24:51.374199   33676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:24:51.374548   33676 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:24:51.374731   33676 main.go:141] libmachine: (gvisor-694714) Calling .DriverName
	I0224 01:24:51.374905   33676 main.go:141] libmachine: (gvisor-694714) Calling .GetState
	I0224 01:24:51.376294   33676 fix.go:103] recreateIfNeeded on gvisor-694714: state=Stopped err=<nil>
	I0224 01:24:51.376325   33676 main.go:141] libmachine: (gvisor-694714) Calling .DriverName
	W0224 01:24:51.376485   33676 fix.go:129] unexpected machine state, will restart: <nil>
	I0224 01:24:51.378584   33676 out.go:177] * Restarting existing kvm2 VM for "gvisor-694714" ...
	I0224 01:24:46.921751   33267 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 01:24:46.934126   33267 system_pods.go:59] 6 kube-system pods found
	I0224 01:24:46.934148   33267 system_pods.go:61] "coredns-787d4945fb-5kk6f" [864031ca-0190-46a6-9191-bed0ab15761f] Running
	I0224 01:24:46.934153   33267 system_pods.go:61] "etcd-pause-966618" [cd22a134-0381-429c-b9b1-9cc9c2130730] Running
	I0224 01:24:46.934157   33267 system_pods.go:61] "kube-apiserver-pause-966618" [152ed33c-514e-4289-a994-58e7d466b19d] Running
	I0224 01:24:46.934162   33267 system_pods.go:61] "kube-controller-manager-pause-966618" [e87845de-aa91-4c77-9ece-00268d888b81] Running
	I0224 01:24:46.934168   33267 system_pods.go:61] "kube-proxy-7wlbf" [98036b9d-4a03-4d42-9f71-28b8df888be5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0224 01:24:46.934172   33267 system_pods.go:61] "kube-scheduler-pause-966618" [c6381154-d98c-4778-886c-6390c12c324e] Running
	I0224 01:24:46.934177   33267 system_pods.go:74] duration metric: took 12.408948ms to wait for pod list to return data ...
	I0224 01:24:46.934186   33267 node_conditions.go:102] verifying NodePressure condition ...
	I0224 01:24:46.937710   33267 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0224 01:24:46.937732   33267 node_conditions.go:123] node cpu capacity is 2
	I0224 01:24:46.937740   33267 node_conditions.go:105] duration metric: took 3.549963ms to run NodePressure ...
	I0224 01:24:46.937755   33267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 01:24:47.235472   33267 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0224 01:24:47.240426   33267 retry.go:31] will retry after 218.964762ms: kubelet not initialised
	I0224 01:24:47.465916   33267 retry.go:31] will retry after 301.879234ms: kubelet not initialised
	I0224 01:24:47.774334   33267 kubeadm.go:784] kubelet initialised
	I0224 01:24:47.774357   33267 kubeadm.go:785] duration metric: took 538.862246ms waiting for restarted kubelet to initialise ...
	I0224 01:24:47.774363   33267 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 01:24:47.779055   33267 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-5kk6f" in "kube-system" namespace to be "Ready" ...
	I0224 01:24:47.787041   33267 pod_ready.go:92] pod "coredns-787d4945fb-5kk6f" in "kube-system" namespace has status "Ready":"True"
	I0224 01:24:47.787053   33267 pod_ready.go:81] duration metric: took 7.980412ms waiting for pod "coredns-787d4945fb-5kk6f" in "kube-system" namespace to be "Ready" ...
	I0224 01:24:47.787060   33267 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:24:49.808529   33267 pod_ready.go:102] pod "etcd-pause-966618" in "kube-system" namespace has status "Ready":"False"
	I0224 01:24:51.813126   33267 pod_ready.go:102] pod "etcd-pause-966618" in "kube-system" namespace has status "Ready":"False"
	I0224 01:24:49.167782   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | Getting to WaitForSSH function...
	I0224 01:24:49.170422   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.170837   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:49.170854   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.170974   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | Using SSH client type: external
	I0224 01:24:49.171011   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | Using SSH private key: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/NoKubernetes-394034/id_rsa (-rw-------)
	I0224 01:24:49.171032   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15909-4074/.minikube/machines/NoKubernetes-394034/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0224 01:24:49.171037   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | About to run SSH command:
	I0224 01:24:49.171044   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | exit 0
	I0224 01:24:49.257572   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | SSH cmd err, output: <nil>: 
	I0224 01:24:49.257865   33541 main.go:141] libmachine: (NoKubernetes-394034) KVM machine creation complete!
	I0224 01:24:49.258204   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetConfigRaw
	I0224 01:24:49.258689   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .DriverName
	I0224 01:24:49.258935   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .DriverName
	I0224 01:24:49.259088   33541 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0224 01:24:49.259100   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetState
	I0224 01:24:49.260539   33541 main.go:141] libmachine: Detecting operating system of created instance...
	I0224 01:24:49.260549   33541 main.go:141] libmachine: Waiting for SSH to be available...
	I0224 01:24:49.260556   33541 main.go:141] libmachine: Getting to WaitForSSH function...
	I0224 01:24:49.260565   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHHostname
	I0224 01:24:49.262852   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.263193   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:49.263216   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.263364   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHPort
	I0224 01:24:49.263528   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:49.263681   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:49.263813   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHUsername
	I0224 01:24:49.263948   33541 main.go:141] libmachine: Using SSH client type: native
	I0224 01:24:49.264352   33541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0224 01:24:49.264358   33541 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0224 01:24:49.376440   33541 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 01:24:49.376455   33541 main.go:141] libmachine: Detecting the provisioner...
	I0224 01:24:49.376461   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHHostname
	I0224 01:24:49.379231   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.379560   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:49.379580   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.379696   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHPort
	I0224 01:24:49.379868   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:49.379969   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:49.380090   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHUsername
	I0224 01:24:49.380235   33541 main.go:141] libmachine: Using SSH client type: native
	I0224 01:24:49.380613   33541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0224 01:24:49.380621   33541 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0224 01:24:49.494239   33541 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g41e8300-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0224 01:24:49.494291   33541 main.go:141] libmachine: found compatible host: buildroot
	I0224 01:24:49.494296   33541 main.go:141] libmachine: Provisioning with buildroot...
	I0224 01:24:49.494303   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetMachineName
	I0224 01:24:49.494526   33541 buildroot.go:166] provisioning hostname "NoKubernetes-394034"
	I0224 01:24:49.494561   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetMachineName
	I0224 01:24:49.494731   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHHostname
	I0224 01:24:49.497224   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.497585   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:49.497605   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.497753   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHPort
	I0224 01:24:49.497892   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:49.498040   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:49.498175   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHUsername
	I0224 01:24:49.498313   33541 main.go:141] libmachine: Using SSH client type: native
	I0224 01:24:49.498777   33541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0224 01:24:49.498785   33541 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-394034 && echo "NoKubernetes-394034" | sudo tee /etc/hostname
	I0224 01:24:49.623459   33541 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-394034
	
	I0224 01:24:49.623473   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHHostname
	I0224 01:24:49.626153   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.626480   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:49.626496   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.626651   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHPort
	I0224 01:24:49.626808   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:49.626952   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:49.627110   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHUsername
	I0224 01:24:49.627252   33541 main.go:141] libmachine: Using SSH client type: native
	I0224 01:24:49.627713   33541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0224 01:24:49.627725   33541 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-394034' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-394034/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-394034' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 01:24:49.753972   33541 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 01:24:49.753986   33541 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15909-4074/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-4074/.minikube}
	I0224 01:24:49.753998   33541 buildroot.go:174] setting up certificates
	I0224 01:24:49.754006   33541 provision.go:83] configureAuth start
	I0224 01:24:49.754016   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetMachineName
	I0224 01:24:49.754296   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetIP
	I0224 01:24:49.756940   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.757313   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:49.757338   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.757523   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHHostname
	I0224 01:24:49.759863   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.760217   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:49.760240   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.760367   33541 provision.go:138] copyHostCerts
	I0224 01:24:49.760425   33541 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem, removing ...
	I0224 01:24:49.760431   33541 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem
	I0224 01:24:49.760507   33541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/key.pem (1679 bytes)
	I0224 01:24:49.760626   33541 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem, removing ...
	I0224 01:24:49.760632   33541 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem
	I0224 01:24:49.760669   33541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/ca.pem (1078 bytes)
	I0224 01:24:49.760727   33541 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem, removing ...
	I0224 01:24:49.760730   33541 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem
	I0224 01:24:49.760750   33541 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-4074/.minikube/cert.pem (1123 bytes)
	I0224 01:24:49.760796   33541 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-394034 san=[192.168.61.116 192.168.61.116 localhost 127.0.0.1 minikube NoKubernetes-394034]
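The server cert generated here carries the SAN list shown in the log (machine IP, localhost, minikube, the machine name). A self-signed illustration with Go's crypto/x509 — minikube actually signs with the CA key pair from its certs directory, so treat this as a sketch of the SAN handling only:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.NoKubernetes-394034"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN set from the log line above: IPs plus DNS names.
		IPAddresses: []net.IP{net.ParseIP("192.168.61.116"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "NoKubernetes-394034"},
	}
	// Self-signed for illustration; minikube passes its CA cert and key here.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}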
	I0224 01:24:49.919118   33541 provision.go:172] copyRemoteCerts
	I0224 01:24:49.919175   33541 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 01:24:49.919203   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHHostname
	I0224 01:24:49.922478   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.922840   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:49.922865   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:49.923121   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHPort
	I0224 01:24:49.923314   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:49.923500   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHUsername
	I0224 01:24:49.923662   33541 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/NoKubernetes-394034/id_rsa Username:docker}
	I0224 01:24:50.015124   33541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0224 01:24:50.040165   33541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0224 01:24:50.062596   33541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0224 01:24:50.085045   33541 provision.go:86] duration metric: configureAuth took 331.027157ms
	I0224 01:24:50.085062   33541 buildroot.go:189] setting minikube options for container-runtime
	I0224 01:24:50.085263   33541 config.go:182] Loaded profile config "NoKubernetes-394034": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 01:24:50.085284   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .DriverName
	I0224 01:24:50.085580   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHHostname
	I0224 01:24:50.088430   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:50.088780   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:50.088799   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:50.088944   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHPort
	I0224 01:24:50.089127   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:50.089322   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:50.089502   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHUsername
	I0224 01:24:50.089672   33541 main.go:141] libmachine: Using SSH client type: native
	I0224 01:24:50.090054   33541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0224 01:24:50.090061   33541 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0224 01:24:50.210906   33541 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0224 01:24:50.210917   33541 buildroot.go:70] root file system type: tmpfs
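The container-runtime options step starts by probing the root filesystem type — exactly the `df --output=fstype / | tail -n 1` command above. The same probe from Go (assumes GNU coreutils on the target, as in the buildroot guest):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same probe the provisioner runs over SSH: the filesystem type of "/".
	out, err := exec.Command("sh", "-c", "df --output=fstype / | tail -n 1").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("root file system type:", strings.TrimSpace(string(out)))
}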
	I0224 01:24:50.211010   33541 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0224 01:24:50.211029   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHHostname
	I0224 01:24:50.213433   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:50.213772   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:50.213791   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:50.213956   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHPort
	I0224 01:24:50.214135   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:50.214261   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:50.214406   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHUsername
	I0224 01:24:50.214526   33541 main.go:141] libmachine: Using SSH client type: native
	I0224 01:24:50.214910   33541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0224 01:24:50.214960   33541 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0224 01:24:50.343633   33541 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0224 01:24:50.343650   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHHostname
	I0224 01:24:50.346522   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:50.346984   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:50.347006   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:50.347191   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHPort
	I0224 01:24:50.347370   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:50.347597   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:50.347731   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHUsername
	I0224 01:24:50.347925   33541 main.go:141] libmachine: Using SSH client type: native
	I0224 01:24:50.348454   33541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0224 01:24:50.348466   33541 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0224 01:24:51.087401   33541 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
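The diff/mv/systemctl one-liner above is a write-if-changed pattern: render the unit to docker.service.new, compare against the live file, and only swap it in and restart on a difference (here the diff fails because no unit existed yet, so the new file is simply installed). A rough local equivalent, with illustrative names:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// syncFile mirrors the shell pattern in the log: compare the candidate
// contents against the live file and only replace it on a difference.
// It returns true when the caller should daemon-reload/restart the service.
func syncFile(path string, want []byte) (bool, error) {
	have, err := os.ReadFile(path)
	if err == nil && bytes.Equal(have, want) {
		return false, nil // unchanged, nothing to do
	}
	if err := os.WriteFile(path+".new", want, 0o644); err != nil {
		return false, err
	}
	return true, os.Rename(path+".new", path)
}

func main() {
	changed, err := syncFile("/tmp/docker.service.demo", []byte("[Unit]\nDescription=demo\n"))
	if err != nil {
		panic(err)
	}
	fmt.Println("changed:", changed)
}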
	I0224 01:24:51.087415   33541 main.go:141] libmachine: Checking connection to Docker...
	I0224 01:24:51.087425   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetURL
	I0224 01:24:51.088669   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | Using libvirt version 6000000
	I0224 01:24:51.091014   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:51.091429   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:51.091453   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:51.091678   33541 main.go:141] libmachine: Docker is up and running!
	I0224 01:24:51.091690   33541 main.go:141] libmachine: Reticulating splines...
	I0224 01:24:51.091696   33541 client.go:171] LocalClient.Create took 28.315438502s
	I0224 01:24:51.091726   33541 start.go:167] duration metric: libmachine.API.Create for "NoKubernetes-394034" took 28.315490526s
	I0224 01:24:51.091733   33541 start.go:300] post-start starting for "NoKubernetes-394034" (driver="kvm2")
	I0224 01:24:51.091740   33541 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 01:24:51.091757   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .DriverName
	I0224 01:24:51.091996   33541 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 01:24:51.092023   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHHostname
	I0224 01:24:51.094384   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:51.094743   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:51.094762   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:51.094893   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHPort
	I0224 01:24:51.095091   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:51.095232   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHUsername
	I0224 01:24:51.095377   33541 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/NoKubernetes-394034/id_rsa Username:docker}
	I0224 01:24:51.190462   33541 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 01:24:51.195373   33541 info.go:137] Remote host: Buildroot 2021.02.12
	I0224 01:24:51.195388   33541 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-4074/.minikube/addons for local assets ...
	I0224 01:24:51.195447   33541 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-4074/.minikube/files for local assets ...
	I0224 01:24:51.195512   33541 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem -> 111312.pem in /etc/ssl/certs
	I0224 01:24:51.195582   33541 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 01:24:51.203334   33541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem --> /etc/ssl/certs/111312.pem (1708 bytes)
	I0224 01:24:51.232155   33541 start.go:303] post-start completed in 140.408799ms
	I0224 01:24:51.232193   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetConfigRaw
	I0224 01:24:51.232796   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetIP
	I0224 01:24:51.235822   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:51.236191   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:51.236214   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:51.236454   33541 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/config.json ...
	I0224 01:24:51.236620   33541 start.go:128] duration metric: createHost completed in 28.478153853s
	I0224 01:24:51.236635   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHHostname
	I0224 01:24:51.238989   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:51.239301   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:51.239323   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:51.239465   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHPort
	I0224 01:24:51.239681   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:51.239845   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:51.239985   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHUsername
	I0224 01:24:51.240140   33541 main.go:141] libmachine: Using SSH client type: native
	I0224 01:24:51.240512   33541 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.61.116 22 <nil> <nil>}
	I0224 01:24:51.240518   33541 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0224 01:24:51.354094   33541 main.go:141] libmachine: SSH cmd err, output: <nil>: 1677201891.343518188
	
	I0224 01:24:51.354106   33541 fix.go:207] guest clock: 1677201891.343518188
	I0224 01:24:51.354113   33541 fix.go:220] Guest: 2023-02-24 01:24:51.343518188 +0000 UTC Remote: 2023-02-24 01:24:51.236625205 +0000 UTC m=+28.585897458 (delta=106.892983ms)
	I0224 01:24:51.354127   33541 fix.go:191] guest clock delta is within tolerance: 106.892983ms
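The guest clock check compares `date +%s.%N` from the VM against the host-side timestamp and skips any clock adjustment when the delta is small. A toy reproduction of the delta computation using the values from this log (the 2s tolerance below is an assumption; the log only shows that ~107ms was accepted):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest: the seconds.nanoseconds value returned by `date +%s.%N`.
	guest := time.Unix(1677201891, 343518188)
	// Remote: the host-side wall clock at the moment of the check.
	remote := time.Date(2023, time.February, 24, 1, 24, 51, 236625205, time.UTC)
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < 2*time.Second)
}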
	I0224 01:24:51.354130   33541 start.go:83] releasing machines lock for "NoKubernetes-394034", held for 28.595744098s
	I0224 01:24:51.354152   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .DriverName
	I0224 01:24:51.354418   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetIP
	I0224 01:24:51.357107   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:51.357439   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:51.357484   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:51.357597   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .DriverName
	I0224 01:24:51.358089   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .DriverName
	I0224 01:24:51.358275   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .DriverName
	I0224 01:24:51.358361   33541 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0224 01:24:51.358393   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHHostname
	I0224 01:24:51.358457   33541 ssh_runner.go:195] Run: cat /version.json
	I0224 01:24:51.358467   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHHostname
	I0224 01:24:51.360925   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:51.361237   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:51.361259   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:51.361330   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:51.361389   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHPort
	I0224 01:24:51.361558   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:51.361664   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHUsername
	I0224 01:24:51.361783   33541 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/NoKubernetes-394034/id_rsa Username:docker}
	I0224 01:24:51.361799   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:24:51.361823   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:24:51.361950   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHPort
	I0224 01:24:51.362074   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHKeyPath
	I0224 01:24:51.362166   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetSSHUsername
	I0224 01:24:51.362242   33541 sshutil.go:53] new ssh client: &{IP:192.168.61.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/NoKubernetes-394034/id_rsa Username:docker}
	I0224 01:24:51.466956   33541 ssh_runner.go:195] Run: systemctl --version
	I0224 01:24:51.472400   33541 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0224 01:24:51.477784   33541 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0224 01:24:51.477819   33541 ssh_runner.go:195] Run: which cri-dockerd
	I0224 01:24:51.481497   33541 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0224 01:24:51.491458   33541 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0224 01:24:51.507638   33541 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0224 01:24:51.524902   33541 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
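Disabling the conflicting CNI configs is a rename sweep: anything matching bridge or podman under /etc/cni/net.d gets a .mk_disabled suffix. Sketched in Go (same effect as the find/-exec mv above, which the log runs remotely):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	matches, err := filepath.Glob(filepath.Join(dir, "*"))
	if err != nil {
		panic(err)
	}
	for _, m := range matches {
		base := filepath.Base(m)
		if strings.HasSuffix(base, ".mk_disabled") {
			continue // already parked
		}
		if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Println("skip:", err)
				continue
			}
			fmt.Println("disabled", m)
		}
	}
}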
	I0224 01:24:51.524913   33541 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 01:24:51.525001   33541 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 01:24:51.554887   33541 docker.go:630] Got preloaded images: 
	I0224 01:24:51.554901   33541 docker.go:636] registry.k8s.io/kube-apiserver:v1.26.1 wasn't preloaded
	I0224 01:24:51.554944   33541 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0224 01:24:51.564967   33541 ssh_runner.go:195] Run: which lz4
	I0224 01:24:51.568643   33541 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0224 01:24:51.572602   33541 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0224 01:24:51.572623   33541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (416334111 bytes)
	I0224 01:24:51.379791   33676 main.go:141] libmachine: (gvisor-694714) Calling .Start
	I0224 01:24:51.379930   33676 main.go:141] libmachine: (gvisor-694714) Ensuring networks are active...
	I0224 01:24:51.380621   33676 main.go:141] libmachine: (gvisor-694714) Ensuring network default is active
	I0224 01:24:51.380975   33676 main.go:141] libmachine: (gvisor-694714) Ensuring network mk-gvisor-694714 is active
	I0224 01:24:51.381306   33676 main.go:141] libmachine: (gvisor-694714) Getting domain xml...
	I0224 01:24:51.382028   33676 main.go:141] libmachine: (gvisor-694714) Creating domain...
	I0224 01:24:52.991884   33676 main.go:141] libmachine: (gvisor-694714) Waiting to get IP...
	I0224 01:24:52.992951   33676 main.go:141] libmachine: (gvisor-694714) DBG | domain gvisor-694714 has defined MAC address 52:54:00:ea:c0:87 in network mk-gvisor-694714
	I0224 01:24:52.993372   33676 main.go:141] libmachine: (gvisor-694714) DBG | unable to find current IP address of domain gvisor-694714 in network mk-gvisor-694714
	I0224 01:24:52.993497   33676 main.go:141] libmachine: (gvisor-694714) DBG | I0224 01:24:52.993356   33749 retry.go:31] will retry after 246.858858ms: waiting for machine to come up
	I0224 01:24:53.242127   33676 main.go:141] libmachine: (gvisor-694714) DBG | domain gvisor-694714 has defined MAC address 52:54:00:ea:c0:87 in network mk-gvisor-694714
	I0224 01:24:53.242684   33676 main.go:141] libmachine: (gvisor-694714) DBG | unable to find current IP address of domain gvisor-694714 in network mk-gvisor-694714
	I0224 01:24:53.242708   33676 main.go:141] libmachine: (gvisor-694714) DBG | I0224 01:24:53.242622   33749 retry.go:31] will retry after 317.124816ms: waiting for machine to come up
	I0224 01:24:53.561186   33676 main.go:141] libmachine: (gvisor-694714) DBG | domain gvisor-694714 has defined MAC address 52:54:00:ea:c0:87 in network mk-gvisor-694714
	I0224 01:24:53.561653   33676 main.go:141] libmachine: (gvisor-694714) DBG | unable to find current IP address of domain gvisor-694714 in network mk-gvisor-694714
	I0224 01:24:53.561690   33676 main.go:141] libmachine: (gvisor-694714) DBG | I0224 01:24:53.561625   33749 retry.go:31] will retry after 351.190965ms: waiting for machine to come up
	I0224 01:24:53.914119   33676 main.go:141] libmachine: (gvisor-694714) DBG | domain gvisor-694714 has defined MAC address 52:54:00:ea:c0:87 in network mk-gvisor-694714
	I0224 01:24:53.914752   33676 main.go:141] libmachine: (gvisor-694714) DBG | unable to find current IP address of domain gvisor-694714 in network mk-gvisor-694714
	I0224 01:24:53.914777   33676 main.go:141] libmachine: (gvisor-694714) DBG | I0224 01:24:53.914685   33749 retry.go:31] will retry after 404.394392ms: waiting for machine to come up
	I0224 01:24:54.320168   33676 main.go:141] libmachine: (gvisor-694714) DBG | domain gvisor-694714 has defined MAC address 52:54:00:ea:c0:87 in network mk-gvisor-694714
	I0224 01:24:54.320688   33676 main.go:141] libmachine: (gvisor-694714) DBG | unable to find current IP address of domain gvisor-694714 in network mk-gvisor-694714
	I0224 01:24:54.320711   33676 main.go:141] libmachine: (gvisor-694714) DBG | I0224 01:24:54.320647   33749 retry.go:31] will retry after 695.572509ms: waiting for machine to come up
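The `retry.go:31` lines show the wait-for-IP loop backing off with growing, jittered delays (247ms, 317ms, 351ms, ...). A small Go sketch of that shape; the exact backoff policy here is an assumption, not minikube's implementation:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup with a jittered, growing backoff until it returns
// an address or the attempt budget is spent.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	backoff := 250 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff += backoff / 2
	}
	return "", errors.New("machine never reported an IP")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("no lease yet") // simulate the missing DHCP lease
		}
		return "192.168.61.116", nil
	}, 10)
	fmt.Println(ip, err)
}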
	I0224 01:24:54.309888   33267 pod_ready.go:102] pod "etcd-pause-966618" in "kube-system" namespace has status "Ready":"False"
	I0224 01:24:56.309957   33267 pod_ready.go:102] pod "etcd-pause-966618" in "kube-system" namespace has status "Ready":"False"
	I0224 01:24:53.303974   33541 docker.go:594] Took 1.735368 seconds to copy over tarball
	I0224 01:24:53.304030   33541 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0224 01:24:56.117267   33541 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.81321656s)
	I0224 01:24:56.117281   33541 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0224 01:24:56.157910   33541 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0224 01:24:56.168959   33541 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2628 bytes)
	I0224 01:24:56.186568   33541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 01:24:56.292108   33541 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0224 01:24:55.018242   33676 main.go:141] libmachine: (gvisor-694714) DBG | domain gvisor-694714 has defined MAC address 52:54:00:ea:c0:87 in network mk-gvisor-694714
	I0224 01:24:55.018802   33676 main.go:141] libmachine: (gvisor-694714) DBG | unable to find current IP address of domain gvisor-694714 in network mk-gvisor-694714
	I0224 01:24:55.018827   33676 main.go:141] libmachine: (gvisor-694714) DBG | I0224 01:24:55.018728   33749 retry.go:31] will retry after 583.578143ms: waiting for machine to come up
	I0224 01:24:55.603573   33676 main.go:141] libmachine: (gvisor-694714) DBG | domain gvisor-694714 has defined MAC address 52:54:00:ea:c0:87 in network mk-gvisor-694714
	I0224 01:24:55.603994   33676 main.go:141] libmachine: (gvisor-694714) DBG | unable to find current IP address of domain gvisor-694714 in network mk-gvisor-694714
	I0224 01:24:55.604017   33676 main.go:141] libmachine: (gvisor-694714) DBG | I0224 01:24:55.603931   33749 retry.go:31] will retry after 1.080429432s: waiting for machine to come up
	I0224 01:24:56.685975   33676 main.go:141] libmachine: (gvisor-694714) DBG | domain gvisor-694714 has defined MAC address 52:54:00:ea:c0:87 in network mk-gvisor-694714
	I0224 01:24:56.686364   33676 main.go:141] libmachine: (gvisor-694714) DBG | unable to find current IP address of domain gvisor-694714 in network mk-gvisor-694714
	I0224 01:24:56.686387   33676 main.go:141] libmachine: (gvisor-694714) DBG | I0224 01:24:56.686319   33749 retry.go:31] will retry after 1.235678304s: waiting for machine to come up
	I0224 01:24:57.923073   33676 main.go:141] libmachine: (gvisor-694714) DBG | domain gvisor-694714 has defined MAC address 52:54:00:ea:c0:87 in network mk-gvisor-694714
	I0224 01:24:57.923491   33676 main.go:141] libmachine: (gvisor-694714) DBG | unable to find current IP address of domain gvisor-694714 in network mk-gvisor-694714
	I0224 01:24:57.923521   33676 main.go:141] libmachine: (gvisor-694714) DBG | I0224 01:24:57.923421   33749 retry.go:31] will retry after 1.86022193s: waiting for machine to come up
	I0224 01:24:58.784183   33267 pod_ready.go:102] pod "etcd-pause-966618" in "kube-system" namespace has status "Ready":"False"
	I0224 01:25:00.810267   33267 pod_ready.go:102] pod "etcd-pause-966618" in "kube-system" namespace has status "Ready":"False"
	I0224 01:24:59.443731   33541 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.151599008s)
	I0224 01:24:59.443757   33541 start.go:485] detecting cgroup driver to use...
	I0224 01:24:59.443855   33541 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 01:24:59.460466   33541 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0224 01:24:59.470976   33541 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0224 01:24:59.480789   33541 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0224 01:24:59.480833   33541 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0224 01:24:59.490754   33541 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 01:24:59.499566   33541 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0224 01:24:59.508496   33541 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 01:24:59.517278   33541 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 01:24:59.526736   33541 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0224 01:24:59.535557   33541 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 01:24:59.544066   33541 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 01:24:59.552239   33541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 01:24:59.659255   33541 ssh_runner.go:195] Run: sudo systemctl restart containerd
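The run of `sed -i` commands above rewrites containerd's config.toml so its cgroup driver, runc runtime version, and CNI conf_dir match what minikube selected. The SystemdCgroup edit, for example, is just an anchored regex replace; in Go:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Equivalent of the sed edit in the log: force SystemdCgroup = false so
	// containerd matches the "cgroupfs" driver chosen for this run.
	conf := "[plugins.cri]\n  SystemdCgroup = true\n"
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}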
	I0224 01:24:59.678215   33541 start.go:485] detecting cgroup driver to use...
	I0224 01:24:59.678279   33541 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0224 01:24:59.691790   33541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0224 01:24:59.703791   33541 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0224 01:24:59.719101   33541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0224 01:24:59.732066   33541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0224 01:24:59.742952   33541 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0224 01:24:59.770868   33541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0224 01:24:59.783768   33541 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 01:24:59.803437   33541 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0224 01:24:59.932318   33541 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0224 01:25:00.069995   33541 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0224 01:25:00.070035   33541 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0224 01:25:00.087933   33541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 01:25:00.189178   33541 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0224 01:25:01.551351   33541 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.362143972s)
	I0224 01:25:01.551412   33541 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 01:25:01.677931   33541 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0224 01:25:01.798231   33541 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 01:25:01.935746   33541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 01:25:02.046836   33541 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0224 01:25:02.067093   33541 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0224 01:25:02.067162   33541 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0224 01:25:02.074272   33541 start.go:553] Will wait 60s for crictl version
	I0224 01:25:02.074326   33541 ssh_runner.go:195] Run: which crictl
	I0224 01:25:02.078094   33541 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0224 01:25:02.208613   33541 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0224 01:25:02.208669   33541 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 01:25:02.248207   33541 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
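"Will wait 60s for socket path /var/run/cri-dockerd.sock" is a readiness poll on the CRI socket. The log checks it with `stat` over SSH; below is a sketch that instead dials the socket until it accepts connections (illustrative helper, not minikube's code):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSocket polls a unix socket until it accepts a connection or the
// deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("unix", path, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s not ready within %v", path, timeout)
}

func main() {
	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
}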
	I0224 01:25:02.309074   33267 pod_ready.go:92] pod "etcd-pause-966618" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:02.309106   33267 pod_ready.go:81] duration metric: took 14.522040109s waiting for pod "etcd-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:02.309120   33267 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:02.315314   33267 pod_ready.go:92] pod "kube-apiserver-pause-966618" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:02.315341   33267 pod_ready.go:81] duration metric: took 6.212713ms waiting for pod "kube-apiserver-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:02.315353   33267 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:02.319620   33267 pod_ready.go:92] pod "kube-controller-manager-pause-966618" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:02.319637   33267 pod_ready.go:81] duration metric: took 4.27604ms waiting for pod "kube-controller-manager-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:02.319647   33267 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7wlbf" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:02.326775   33267 pod_ready.go:92] pod "kube-proxy-7wlbf" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:02.326829   33267 pod_ready.go:81] duration metric: took 7.173792ms waiting for pod "kube-proxy-7wlbf" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:02.326852   33267 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:02.333414   33267 pod_ready.go:92] pod "kube-scheduler-pause-966618" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:02.333431   33267 pod_ready.go:81] duration metric: took 6.567454ms waiting for pod "kube-scheduler-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:02.333439   33267 pod_ready.go:38] duration metric: took 14.559067346s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
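The pod_ready lines implement a per-pod poll: re-check status every few seconds, log Ready=False, and give up after the 4m0s budget. A generic Go sketch of that loop with a stubbed status probe (the 2s poll interval is an assumption):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitPodReady polls a readiness probe until it reports true or the timeout
// expires.
func waitPodReady(name string, ready func() (bool, error), timeout time.Duration) error {
	start := time.Now()
	for time.Since(start) < timeout {
		ok, err := ready()
		if err != nil {
			return err
		}
		if ok {
			fmt.Printf("pod %q Ready after %v\n", name, time.Since(start).Round(time.Millisecond))
			return nil
		}
		fmt.Printf("pod %q has status \"Ready\":\"False\"\n", name)
		time.Sleep(2 * time.Second)
	}
	return errors.New("timed out waiting for " + name)
}

func main() {
	n := 0
	_ = waitPodReady("etcd-pause-966618", func() (bool, error) {
		n++
		return n >= 3, nil // pretend the pod turns Ready on the third check
	}, 4*time.Minute)
}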
	I0224 01:25:02.333460   33267 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0224 01:25:02.355296   33267 ops.go:34] apiserver oom_adj: -16
	I0224 01:25:02.355314   33267 kubeadm.go:637] restartCluster took 57.777507228s
	I0224 01:25:02.355326   33267 kubeadm.go:403] StartCluster complete in 57.85076012s
	I0224 01:25:02.355347   33267 settings.go:142] acquiring lock: {Name:mk174257a2297336a9e6f80080faa7ef819759a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 01:25:02.355426   33267 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15909-4074/kubeconfig
	I0224 01:25:02.356623   33267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-4074/kubeconfig: {Name:mk7a14c2c6ccf91ba70e9a5ad74574ac5676cf63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 01:25:02.356886   33267 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0224 01:25:02.357022   33267 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0224 01:25:02.357120   33267 config.go:182] Loaded profile config "pause-966618": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 01:25:02.357180   33267 cache.go:107] acquiring lock: {Name:mk652b3b8459ff39d515b47d5e4228842d267921 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 01:25:02.359382   33267 out.go:177] * Enabled addons: 
	I0224 01:25:02.357243   33267 cache.go:115] /home/jenkins/minikube-integration/15909-4074/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I0224 01:25:02.357743   33267 kapi.go:59] client config for pause-966618: &rest.Config{Host:"https://192.168.50.59:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/profiles/pause-966618/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/profiles/pause-966618/client.key", CAFile:"/home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 01:25:02.360819   33267 addons.go:492] enable addons completed in 3.794353ms: enabled=[]
	I0224 01:25:02.360843   33267 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/15909-4074/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 3.665817ms
	I0224 01:25:02.360860   33267 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/15909-4074/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I0224 01:25:02.360870   33267 cache.go:87] Successfully saved all images to host disk.
	I0224 01:25:02.361076   33267 config.go:182] Loaded profile config "pause-966618": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 01:25:02.361456   33267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:25:02.361507   33267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:25:02.364964   33267 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-966618" context rescaled to 1 replicas
	I0224 01:25:02.365000   33267 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.59 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0224 01:25:02.367473   33267 out.go:177] * Verifying Kubernetes components...
	I0224 01:25:02.296415   33541 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0224 01:25:02.296532   33541 main.go:141] libmachine: (NoKubernetes-394034) Calling .GetIP
	I0224 01:25:02.300095   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:25:02.300499   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9f:50:8f", ip: ""} in network mk-NoKubernetes-394034: {Iface:virbr3 ExpiryTime:2023-02-24 02:24:37 +0000 UTC Type:0 Mac:52:54:00:9f:50:8f Iaid: IPaddr:192.168.61.116 Prefix:24 Hostname:NoKubernetes-394034 Clientid:01:52:54:00:9f:50:8f}
	I0224 01:25:02.300525   33541 main.go:141] libmachine: (NoKubernetes-394034) DBG | domain NoKubernetes-394034 has defined IP address 192.168.61.116 and MAC address 52:54:00:9f:50:8f in network mk-NoKubernetes-394034
	I0224 01:25:02.300770   33541 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0224 01:25:02.306219   33541 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 01:25:02.322208   33541 localpath.go:92] copying /home/jenkins/minikube-integration/15909-4074/.minikube/client.crt -> /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/client.crt
	I0224 01:25:02.322350   33541 localpath.go:117] copying /home/jenkins/minikube-integration/15909-4074/.minikube/client.key -> /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/client.key
	I0224 01:25:02.322483   33541 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 01:25:02.322550   33541 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 01:25:02.352776   33541 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0224 01:25:02.352790   33541 docker.go:560] Images already preloaded, skipping extraction
	I0224 01:25:02.352852   33541 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 01:25:02.395175   33541 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0224 01:25:02.395187   33541 cache_images.go:84] Images are preloaded, skipping loading
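"Images are preloaded, skipping loading" is decided by listing `docker images --format {{.Repository}}:{{.Tag}}` and checking that every expected image is present. A sketch of that containment check:

package main

import (
	"fmt"
	"strings"
)

// preloaded reports whether every expected image appears in the
// `docker images --format {{.Repository}}:{{.Tag}}` output.
func preloaded(output string, expected []string) bool {
	have := map[string]bool{}
	for _, line := range strings.Split(output, "\n") {
		have[strings.TrimSpace(line)] = true
	}
	for _, img := range expected {
		if !have[img] {
			return false
		}
	}
	return true
}

func main() {
	out := "registry.k8s.io/kube-apiserver:v1.26.1\nregistry.k8s.io/pause:3.9\n"
	fmt.Println(preloaded(out, []string{"registry.k8s.io/kube-apiserver:v1.26.1"}))
}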
	I0224 01:25:02.395226   33541 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0224 01:25:02.434875   33541 cni.go:84] Creating CNI manager for ""
	I0224 01:25:02.434889   33541 cni.go:157] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0224 01:25:02.434905   33541 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0224 01:25:02.434919   33541 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.116 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:NoKubernetes-394034 NodeName:NoKubernetes-394034 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0224 01:25:02.435064   33541 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "NoKubernetes-394034"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0224 01:25:02.435148   33541 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=NoKubernetes-394034 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:NoKubernetes-394034 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0224 01:25:02.435187   33541 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0224 01:25:02.447004   33541 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 01:25:02.447063   33541 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0224 01:25:02.458141   33541 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (453 bytes)
	I0224 01:25:02.475923   33541 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 01:25:02.494749   33541 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
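Note that the generated kubeadm config is staged as kubeadm.yaml.new and only copied over kubeadm.yaml after the stale-config check further down (kubeadm.go:152). A rough Go sketch of that stage-then-promote pattern, with hypothetical helper names and os.Rename standing in for the log's sudo cp:

    package main

    import "os"

    // stageConfig writes the freshly generated config next to the live one,
    // leaving the live copy untouched until the caller decides to promote it.
    func stageConfig(path string, data []byte) error {
        return os.WriteFile(path+".new", data, 0644)
    }

    // promoteConfig replaces the live config with the staged copy.
    func promoteConfig(path string) error {
        return os.Rename(path+".new", path)
    }

    func main() {
        if err := stageConfig("/tmp/kubeadm.yaml", []byte("kind: InitConfiguration\n")); err != nil {
            panic(err)
        }
        if err := promoteConfig("/tmp/kubeadm.yaml"); err != nil {
            panic(err)
        }
    }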
	I0224 01:25:02.514183   33541 ssh_runner.go:195] Run: grep 192.168.61.116	control-plane.minikube.internal$ /etc/hosts
	I0224 01:25:02.518247   33541 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 01:25:02.533651   33541 certs.go:56] Setting up /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034 for IP: 192.168.61.116
	I0224 01:25:02.533674   33541 certs.go:186] acquiring lock for shared ca certs: {Name:mk0c9037d1d3974a6bc5ba375ef4804966dba284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 01:25:02.533844   33541 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15909-4074/.minikube/ca.key
	I0224 01:25:02.533899   33541 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15909-4074/.minikube/proxy-client-ca.key
	I0224 01:25:02.534008   33541 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/client.key
	I0224 01:25:02.534030   33541 certs.go:315] generating minikube signed cert: /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/apiserver.key.6062d48b
	I0224 01:25:02.534042   33541 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/apiserver.crt.6062d48b with IP's: [192.168.61.116 10.96.0.1 127.0.0.1 10.0.0.1]
	I0224 01:25:02.639355   33541 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/apiserver.crt.6062d48b ...
	I0224 01:25:02.639374   33541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/apiserver.crt.6062d48b: {Name:mk204b8f0fff62839ad3dfce86026dcc237148df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 01:25:02.639590   33541 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/apiserver.key.6062d48b ...
	I0224 01:25:02.639599   33541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/apiserver.key.6062d48b: {Name:mkabcccd3f286ea677f26b21126891a56772d043 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 01:25:02.639734   33541 certs.go:333] copying /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/apiserver.crt.6062d48b -> /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/apiserver.crt
	I0224 01:25:02.639849   33541 certs.go:337] copying /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/apiserver.key.6062d48b -> /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/apiserver.key
	I0224 01:25:02.639930   33541 certs.go:315] generating aggregator signed cert: /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/proxy-client.key
	I0224 01:25:02.639949   33541 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/proxy-client.crt with IP's: []
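The certs.go/crypto.go steps above issue an apiserver certificate whose IP SANs cover the node IP (192.168.61.116), the kubernetes Service ClusterIP (10.96.0.1), loopback, and 10.0.0.1. A compact crypto/x509 sketch of such a certificate; unlike minikube, which signs with the shared minikubeCA key, this version self-signs purely for brevity:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The IP SANs from the log line above.
            IPAddresses: []net.IP{
                net.ParseIP("192.168.61.116"), net.ParseIP("10.96.0.1"),
                net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
            },
        }
        // Template doubles as parent: a self-signed cert, for brevity only.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }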
	I0224 01:24:59.786519   33676 main.go:141] libmachine: (gvisor-694714) DBG | domain gvisor-694714 has defined MAC address 52:54:00:ea:c0:87 in network mk-gvisor-694714
	I0224 01:24:59.787027   33676 main.go:141] libmachine: (gvisor-694714) DBG | unable to find current IP address of domain gvisor-694714 in network mk-gvisor-694714
	I0224 01:24:59.787043   33676 main.go:141] libmachine: (gvisor-694714) DBG | I0224 01:24:59.786977   33749 retry.go:31] will retry after 1.638205996s: waiting for machine to come up
	I0224 01:25:01.426608   33676 main.go:141] libmachine: (gvisor-694714) DBG | domain gvisor-694714 has defined MAC address 52:54:00:ea:c0:87 in network mk-gvisor-694714
	I0224 01:25:01.427220   33676 main.go:141] libmachine: (gvisor-694714) DBG | unable to find current IP address of domain gvisor-694714 in network mk-gvisor-694714
	I0224 01:25:01.427244   33676 main.go:141] libmachine: (gvisor-694714) DBG | I0224 01:25:01.427100   33749 retry.go:31] will retry after 2.195506564s: waiting for machine to come up
	I0224 01:25:03.624340   33676 main.go:141] libmachine: (gvisor-694714) DBG | domain gvisor-694714 has defined MAC address 52:54:00:ea:c0:87 in network mk-gvisor-694714
	I0224 01:25:03.624858   33676 main.go:141] libmachine: (gvisor-694714) DBG | unable to find current IP address of domain gvisor-694714 in network mk-gvisor-694714
	I0224 01:25:03.624880   33676 main.go:141] libmachine: (gvisor-694714) DBG | I0224 01:25:03.624801   33749 retry.go:31] will retry after 3.478404418s: waiting for machine to come up
	I0224 01:25:02.368794   33267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 01:25:02.382200   33267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37331
	I0224 01:25:02.382731   33267 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:25:02.383282   33267 main.go:141] libmachine: Using API Version  1
	I0224 01:25:02.383305   33267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:25:02.383650   33267 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:25:02.383801   33267 main.go:141] libmachine: (pause-966618) Calling .GetState
	I0224 01:25:02.386097   33267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:25:02.386152   33267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:25:02.407245   33267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42479
	I0224 01:25:02.409582   33267 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:25:02.410204   33267 main.go:141] libmachine: Using API Version  1
	I0224 01:25:02.410229   33267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:25:02.410659   33267 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:25:02.410858   33267 main.go:141] libmachine: (pause-966618) Calling .DriverName
	I0224 01:25:02.411069   33267 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 01:25:02.411100   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHHostname
	I0224 01:25:02.415890   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:25:02.415936   33267 main.go:141] libmachine: (pause-966618) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:4c:50", ip: ""} in network mk-pause-966618: {Iface:virbr2 ExpiryTime:2023-02-24 02:22:50 +0000 UTC Type:0 Mac:52:54:00:6b:4c:50 Iaid: IPaddr:192.168.50.59 Prefix:24 Hostname:pause-966618 Clientid:01:52:54:00:6b:4c:50}
	I0224 01:25:02.415959   33267 main.go:141] libmachine: (pause-966618) DBG | domain pause-966618 has defined IP address 192.168.50.59 and MAC address 52:54:00:6b:4c:50 in network mk-pause-966618
	I0224 01:25:02.416047   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHPort
	I0224 01:25:02.416230   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHKeyPath
	I0224 01:25:02.416416   33267 main.go:141] libmachine: (pause-966618) Calling .GetSSHUsername
	I0224 01:25:02.416697   33267 sshutil.go:53] new ssh client: &{IP:192.168.50.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/pause-966618/id_rsa Username:docker}
	I0224 01:25:02.581908   33267 node_ready.go:35] waiting up to 6m0s for node "pause-966618" to be "Ready" ...
	I0224 01:25:02.582172   33267 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0224 01:25:02.585550   33267 node_ready.go:49] node "pause-966618" has status "Ready":"True"
	I0224 01:25:02.585574   33267 node_ready.go:38] duration metric: took 3.634014ms waiting for node "pause-966618" to be "Ready" ...
	I0224 01:25:02.585585   33267 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
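node_ready.go above simply polls the API server until the node publishes a Ready condition. A rough client-go equivalent (the helper name and kubeconfig loading are illustrative; minikube's internal wait loop differs):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls until the named node reports Ready or the timeout
    // elapses (hypothetical helper, not minikube's implementation).
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("node %q not Ready within %v", name, timeout)
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        if err := waitNodeReady(context.Background(), cs, "pause-966618", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("node is Ready")
    }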
	I0224 01:25:02.601775   33267 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0224 01:25:02.601800   33267 cache_images.go:84] Images are preloaded, skipping loading
	I0224 01:25:02.601807   33267 cache_images.go:262] succeeded pushing to: pause-966618
	I0224 01:25:02.601812   33267 cache_images.go:263] failed pushing to: 
	I0224 01:25:02.601832   33267 main.go:141] libmachine: Making call to close driver server
	I0224 01:25:02.601843   33267 main.go:141] libmachine: (pause-966618) Calling .Close
	I0224 01:25:02.602141   33267 main.go:141] libmachine: Successfully made call to close driver server
	I0224 01:25:02.602162   33267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 01:25:02.602171   33267 main.go:141] libmachine: Making call to close driver server
	I0224 01:25:02.602180   33267 main.go:141] libmachine: (pause-966618) Calling .Close
	I0224 01:25:02.602914   33267 main.go:141] libmachine: Successfully made call to close driver server
	I0224 01:25:02.602930   33267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0224 01:25:02.711320   33267 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-5kk6f" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:03.117011   33267 pod_ready.go:92] pod "coredns-787d4945fb-5kk6f" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:03.117050   33267 pod_ready.go:81] duration metric: took 405.70699ms waiting for pod "coredns-787d4945fb-5kk6f" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:03.117065   33267 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:03.508668   33267 pod_ready.go:92] pod "etcd-pause-966618" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:03.508692   33267 pod_ready.go:81] duration metric: took 391.619335ms waiting for pod "etcd-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:03.508705   33267 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:03.907114   33267 pod_ready.go:92] pod "kube-apiserver-pause-966618" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:03.907137   33267 pod_ready.go:81] duration metric: took 398.424421ms waiting for pod "kube-apiserver-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:03.907149   33267 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:04.306213   33267 pod_ready.go:92] pod "kube-controller-manager-pause-966618" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:04.306238   33267 pod_ready.go:81] duration metric: took 399.079189ms waiting for pod "kube-controller-manager-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:04.306250   33267 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7wlbf" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:04.706570   33267 pod_ready.go:92] pod "kube-proxy-7wlbf" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:04.706595   33267 pod_ready.go:81] duration metric: took 400.337777ms waiting for pod "kube-proxy-7wlbf" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:04.706608   33267 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:05.107315   33267 pod_ready.go:92] pod "kube-scheduler-pause-966618" in "kube-system" namespace has status "Ready":"True"
	I0224 01:25:05.107347   33267 pod_ready.go:81] duration metric: took 400.730489ms waiting for pod "kube-scheduler-pause-966618" in "kube-system" namespace to be "Ready" ...
	I0224 01:25:05.107364   33267 pod_ready.go:38] duration metric: took 2.521766254s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 01:25:05.107389   33267 api_server.go:51] waiting for apiserver process to appear ...
	I0224 01:25:05.107436   33267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 01:25:05.128123   33267 api_server.go:71] duration metric: took 2.763089316s to wait for apiserver process to appear ...
	I0224 01:25:05.128163   33267 api_server.go:87] waiting for apiserver healthz status ...
	I0224 01:25:05.128177   33267 api_server.go:252] Checking apiserver healthz at https://192.168.50.59:8443/healthz ...
	I0224 01:25:05.135652   33267 api_server.go:278] https://192.168.50.59:8443/healthz returned 200:
	ok
	I0224 01:25:05.136519   33267 api_server.go:140] control plane version: v1.26.1
	I0224 01:25:05.136537   33267 api_server.go:130] duration metric: took 8.36697ms to wait for apiserver health ...
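The healthz probe at api_server.go:252 is a plain HTTPS GET of /healthz that must come back 200 with body "ok". A standalone sketch of that probe; InsecureSkipVerify here stands in for minikube's CA-aware transport and is an assumption, not its real client setup:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // pollHealthz retries the apiserver health endpoint until it answers
    // 200 "ok" or the timeout elapses.
    func pollHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("healthz at %s not ok within %v", url, timeout)
    }

    func main() {
        // URL taken from the log line above.
        if err := pollHealthz("https://192.168.50.59:8443/healthz", time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("apiserver healthy")
    }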
	I0224 01:25:05.136546   33267 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 01:25:05.309967   33267 system_pods.go:59] 6 kube-system pods found
	I0224 01:25:05.310003   33267 system_pods.go:61] "coredns-787d4945fb-5kk6f" [864031ca-0190-46a6-9191-bed0ab15761f] Running
	I0224 01:25:05.310011   33267 system_pods.go:61] "etcd-pause-966618" [cd22a134-0381-429c-b9b1-9cc9c2130730] Running
	I0224 01:25:05.310018   33267 system_pods.go:61] "kube-apiserver-pause-966618" [152ed33c-514e-4289-a994-58e7d466b19d] Running
	I0224 01:25:05.310024   33267 system_pods.go:61] "kube-controller-manager-pause-966618" [e87845de-aa91-4c77-9ece-00268d888b81] Running
	I0224 01:25:05.310030   33267 system_pods.go:61] "kube-proxy-7wlbf" [98036b9d-4a03-4d42-9f71-28b8df888be5] Running
	I0224 01:25:05.310038   33267 system_pods.go:61] "kube-scheduler-pause-966618" [c6381154-d98c-4778-886c-6390c12c324e] Running
	I0224 01:25:05.310045   33267 system_pods.go:74] duration metric: took 173.493465ms to wait for pod list to return data ...
	I0224 01:25:05.310057   33267 default_sa.go:34] waiting for default service account to be created ...
	I0224 01:25:05.506994   33267 default_sa.go:45] found service account: "default"
	I0224 01:25:05.507025   33267 default_sa.go:55] duration metric: took 196.958905ms for default service account to be created ...
	I0224 01:25:05.507048   33267 system_pods.go:116] waiting for k8s-apps to be running ...
	I0224 01:25:05.709913   33267 system_pods.go:86] 6 kube-system pods found
	I0224 01:25:05.709939   33267 system_pods.go:89] "coredns-787d4945fb-5kk6f" [864031ca-0190-46a6-9191-bed0ab15761f] Running
	I0224 01:25:05.709946   33267 system_pods.go:89] "etcd-pause-966618" [cd22a134-0381-429c-b9b1-9cc9c2130730] Running
	I0224 01:25:05.709953   33267 system_pods.go:89] "kube-apiserver-pause-966618" [152ed33c-514e-4289-a994-58e7d466b19d] Running
	I0224 01:25:05.709960   33267 system_pods.go:89] "kube-controller-manager-pause-966618" [e87845de-aa91-4c77-9ece-00268d888b81] Running
	I0224 01:25:05.709966   33267 system_pods.go:89] "kube-proxy-7wlbf" [98036b9d-4a03-4d42-9f71-28b8df888be5] Running
	I0224 01:25:05.709972   33267 system_pods.go:89] "kube-scheduler-pause-966618" [c6381154-d98c-4778-886c-6390c12c324e] Running
	I0224 01:25:05.709981   33267 system_pods.go:126] duration metric: took 202.927244ms to wait for k8s-apps to be running ...
	I0224 01:25:05.709993   33267 system_svc.go:44] waiting for kubelet service to be running ....
	I0224 01:25:05.710044   33267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 01:25:05.724049   33267 system_svc.go:56] duration metric: took 14.047173ms WaitForService to wait for kubelet.
	I0224 01:25:05.724072   33267 kubeadm.go:578] duration metric: took 3.359049103s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0224 01:25:05.724093   33267 node_conditions.go:102] verifying NodePressure condition ...
	I0224 01:25:05.908529   33267 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0224 01:25:05.908554   33267 node_conditions.go:123] node cpu capacity is 2
	I0224 01:25:05.908564   33267 node_conditions.go:105] duration metric: took 184.464952ms to run NodePressure ...
	I0224 01:25:05.908574   33267 start.go:228] waiting for startup goroutines ...
	I0224 01:25:05.908580   33267 start.go:233] waiting for cluster config update ...
	I0224 01:25:05.908587   33267 start.go:242] writing updated cluster config ...
	I0224 01:25:05.908913   33267 ssh_runner.go:195] Run: rm -f paused
	I0224 01:25:05.962958   33267 start.go:555] kubectl: 1.26.1, cluster: 1.26.1 (minor skew: 0)
	I0224 01:25:05.966906   33267 out.go:177] * Done! kubectl is now configured to use "pause-966618" cluster and "default" namespace by default
	I0224 01:25:02.829162   33541 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/proxy-client.crt ...
	I0224 01:25:02.829175   33541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/proxy-client.crt: {Name:mk5755565e9159e1bb40e6dd24502216ea2949a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 01:25:02.829348   33541 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/proxy-client.key ...
	I0224 01:25:02.829354   33541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/proxy-client.key: {Name:mke284c95919fb74c0c8b66296dd9323b55ff5b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 01:25:02.829538   33541 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/11131.pem (1338 bytes)
	W0224 01:25:02.829567   33541 certs.go:397] ignoring /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/11131_empty.pem, impossibly tiny 0 bytes
	I0224 01:25:02.829573   33541 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 01:25:02.829593   33541 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/ca.pem (1078 bytes)
	I0224 01:25:02.829612   33541 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/cert.pem (1123 bytes)
	I0224 01:25:02.829629   33541 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/certs/home/jenkins/minikube-integration/15909-4074/.minikube/certs/key.pem (1679 bytes)
	I0224 01:25:02.829660   33541 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem (1708 bytes)
	I0224 01:25:02.830115   33541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0224 01:25:02.856126   33541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0224 01:25:02.880444   33541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0224 01:25:02.905458   33541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/NoKubernetes-394034/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0224 01:25:02.930463   33541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 01:25:02.957941   33541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0224 01:25:02.984374   33541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 01:25:03.010955   33541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0224 01:25:03.037112   33541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/certs/11131.pem --> /usr/share/ca-certificates/11131.pem (1338 bytes)
	I0224 01:25:03.063251   33541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/ssl/certs/111312.pem --> /usr/share/ca-certificates/111312.pem (1708 bytes)
	I0224 01:25:03.090030   33541 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-4074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 01:25:03.119266   33541 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0224 01:25:03.136587   33541 ssh_runner.go:195] Run: openssl version
	I0224 01:25:03.142027   33541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111312.pem && ln -fs /usr/share/ca-certificates/111312.pem /etc/ssl/certs/111312.pem"
	I0224 01:25:03.152240   33541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111312.pem
	I0224 01:25:03.156801   33541 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 00:46 /usr/share/ca-certificates/111312.pem
	I0224 01:25:03.156845   33541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111312.pem
	I0224 01:25:03.162515   33541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111312.pem /etc/ssl/certs/3ec20f2e.0"
	I0224 01:25:03.172748   33541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 01:25:03.182638   33541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 01:25:03.186979   33541 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 00:41 /usr/share/ca-certificates/minikubeCA.pem
	I0224 01:25:03.187011   33541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 01:25:03.192414   33541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0224 01:25:03.202151   33541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11131.pem && ln -fs /usr/share/ca-certificates/11131.pem /etc/ssl/certs/11131.pem"
	I0224 01:25:03.212270   33541 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11131.pem
	I0224 01:25:03.216821   33541 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 00:46 /usr/share/ca-certificates/11131.pem
	I0224 01:25:03.216847   33541 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11131.pem
	I0224 01:25:03.222816   33541 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11131.pem /etc/ssl/certs/51391683.0"
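The openssl/ln pairs above install each CA into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0 is the hash of minikubeCA.pem), the layout OpenSSL consults when verifying peers. A small Go sketch of one such install step (function name illustrative; it shells out to openssl just as the log does):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCACert links a PEM cert into the trust store under its
    // subject-hash name, e.g. /etc/ssl/certs/b5213941.0.
    func installCACert(pemPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
        os.Remove(link) // replace any stale link, mirroring ln -fs
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }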
	I0224 01:25:03.233241   33541 kubeadm.go:401] StartCluster: {Name:NoKubernetes-394034 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:NoKubernetes-394034 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.116 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 01:25:03.233328   33541 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0224 01:25:03.256838   33541 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0224 01:25:03.266511   33541 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 01:25:03.276829   33541 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 01:25:03.287016   33541 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 01:25:03.287045   33541 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0224 01:25:03.336859   33541 kubeadm.go:322] W0224 01:25:03.332252    1312 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0224 01:25:03.494465   33541 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	* 
	* ==> Docker <==
	* -- Journal begins at Fri 2023-02-24 01:22:46 UTC, ends at Fri 2023-02-24 01:25:08 UTC. --
	Feb 24 01:24:39 pause-966618 dockerd[4255]: time="2023-02-24T01:24:39.047002565Z" level=warning msg="cleanup warnings time=\"2023-02-24T01:24:39Z\" level=info msg=\"starting signal loop\" namespace=moby pid=6660 runtime=io.containerd.runc.v2\n"
	Feb 24 01:24:41 pause-966618 dockerd[4255]: time="2023-02-24T01:24:41.598084033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 24 01:24:41 pause-966618 dockerd[4255]: time="2023-02-24T01:24:41.598197087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 24 01:24:41 pause-966618 dockerd[4255]: time="2023-02-24T01:24:41.598219760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 24 01:24:41 pause-966618 dockerd[4255]: time="2023-02-24T01:24:41.598341706Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/409f89a75e66e33968519196a807970098f00f95c635c62db6ebd81afa67ade8 pid=6909 runtime=io.containerd.runc.v2
	Feb 24 01:24:41 pause-966618 dockerd[4255]: time="2023-02-24T01:24:41.612333627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 24 01:24:41 pause-966618 dockerd[4255]: time="2023-02-24T01:24:41.612401915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 24 01:24:41 pause-966618 dockerd[4255]: time="2023-02-24T01:24:41.612413121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 24 01:24:41 pause-966618 dockerd[4255]: time="2023-02-24T01:24:41.612664644Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/4aabdcdde9cf7192024b2d9eb3bc6f2abcb4a59ef91285c8e6682b73e6cc8431 pid=6936 runtime=io.containerd.runc.v2
	Feb 24 01:24:41 pause-966618 dockerd[4255]: time="2023-02-24T01:24:41.616476185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 24 01:24:41 pause-966618 dockerd[4255]: time="2023-02-24T01:24:41.616746230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 24 01:24:41 pause-966618 dockerd[4255]: time="2023-02-24T01:24:41.616900206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 24 01:24:41 pause-966618 dockerd[4255]: time="2023-02-24T01:24:41.619110954Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/201f4222b6ec37ff5bee9fc1edbf2ef9ca1b1a7350a98b9c64e62107eaff46ac pid=6939 runtime=io.containerd.runc.v2
	Feb 24 01:24:47 pause-966618 dockerd[4255]: time="2023-02-24T01:24:47.248316467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 24 01:24:47 pause-966618 dockerd[4255]: time="2023-02-24T01:24:47.248446314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 24 01:24:47 pause-966618 dockerd[4255]: time="2023-02-24T01:24:47.248455765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 24 01:24:47 pause-966618 dockerd[4255]: time="2023-02-24T01:24:47.249046552Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/72c6739efe2b9161a1c69abbcab9fc1b036dbeffdf34fa80ad551d801f4490eb pid=7129 runtime=io.containerd.runc.v2
	Feb 24 01:24:47 pause-966618 dockerd[4255]: time="2023-02-24T01:24:47.543980021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 24 01:24:47 pause-966618 dockerd[4255]: time="2023-02-24T01:24:47.544198182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 24 01:24:47 pause-966618 dockerd[4255]: time="2023-02-24T01:24:47.544359528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 24 01:24:47 pause-966618 dockerd[4255]: time="2023-02-24T01:24:47.544813474Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ec57ffc07efea5c80ab62903046ca28a77fad1b2473a6794d0858b9c45b9188f pid=7177 runtime=io.containerd.runc.v2
	Feb 24 01:24:48 pause-966618 dockerd[4255]: time="2023-02-24T01:24:48.119108515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 24 01:24:48 pause-966618 dockerd[4255]: time="2023-02-24T01:24:48.119252738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 24 01:24:48 pause-966618 dockerd[4255]: time="2023-02-24T01:24:48.119282884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 24 01:24:48 pause-966618 dockerd[4255]: time="2023-02-24T01:24:48.119840619Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8338976709e5eb7d94bc2e0aadeff1333eb52459e09462452d1b3489ff4fecdb pid=7330 runtime=io.containerd.runc.v2
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	8338976709e5e       5185b96f0becf       20 seconds ago       Running             coredns                   2                   72c6739efe2b9
	ec57ffc07efea       46a6bb3c77ce0       21 seconds ago       Running             kube-proxy                2                   4e083861b8719
	4aabdcdde9cf7       655493523f607       27 seconds ago       Running             kube-scheduler            3                   573e5c961fdb1
	201f4222b6ec3       e9c08e11b07f6       27 seconds ago       Running             kube-controller-manager   3                   d8187f234e578
	409f89a75e66e       fce326961ae2d       27 seconds ago       Running             etcd                      3                   6bd59d955c73d
	7fd1d7e9bc798       deb04688c4a35       32 seconds ago       Running             kube-apiserver            2                   b62e8b9c65056
	4fbc20ae5bf7c       fce326961ae2d       47 seconds ago       Exited              etcd                      2                   dfb4d0df0765f
	ad958c9f692bf       e9c08e11b07f6       47 seconds ago       Exited              kube-controller-manager   2                   d6f702db3bf6b
	e8da938203fc1       655493523f607       50 seconds ago       Exited              kube-scheduler            2                   eebafd5971ced
	426996151fbc3       46a6bb3c77ce0       59 seconds ago       Exited              kube-proxy                1                   29450e6f7ba68
	fac1cbcee4196       5185b96f0becf       About a minute ago   Exited              coredns                   1                   4ce5be8f6eb9d
	f3b13c7b26554       deb04688c4a35       About a minute ago   Exited              kube-apiserver            1                   de777fb1d6b37
	
	* 
	* ==> coredns [8338976709e5] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:37882 - 55295 "HINFO IN 7038850563798999389.5597014617150123683. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019122525s
	
	* 
	* ==> coredns [fac1cbcee419] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:52393 - 59882 "HINFO IN 7982410992457572593.3493053231117998011. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021473294s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:44406->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> describe nodes <==
	* Name:               pause-966618
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-966618
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c13299ce0b45f38f7f45d3bc31124c3ea59c0510
	                    minikube.k8s.io/name=pause-966618
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_24T01_23_27_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 24 Feb 2023 01:23:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-966618
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 24 Feb 2023 01:25:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 24 Feb 2023 01:24:45 +0000   Fri, 24 Feb 2023 01:23:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 24 Feb 2023 01:24:45 +0000   Fri, 24 Feb 2023 01:23:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 24 Feb 2023 01:24:45 +0000   Fri, 24 Feb 2023 01:23:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 24 Feb 2023 01:24:45 +0000   Fri, 24 Feb 2023 01:23:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.59
	  Hostname:    pause-966618
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a7569a111714f25b93fea6b50ae934a
	  System UUID:                6a7569a1-1171-4f25-b93f-ea6b50ae934a
	  Boot ID:                    d4ac2b4a-5dc3-4f7c-8fb2-1c7afd60407d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-787d4945fb-5kk6f                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     89s
	  kube-system                 etcd-pause-966618                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         101s
	  kube-system                 kube-apiserver-pause-966618             250m (12%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-controller-manager-pause-966618    200m (10%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-proxy-7wlbf                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-scheduler-pause-966618             100m (5%)     0 (0%)      0 (0%)           0 (0%)         101s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 87s                kube-proxy       
	  Normal  Starting                 20s                kube-proxy       
	  Normal  NodeAllocatableEnforced  101s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  101s               kubelet          Node pause-966618 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    101s               kubelet          Node pause-966618 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     101s               kubelet          Node pause-966618 status is now: NodeHasSufficientPID
	  Normal  Starting                 101s               kubelet          Starting kubelet.
	  Normal  NodeReady                100s               kubelet          Node pause-966618 status is now: NodeReady
	  Normal  RegisteredNode           90s                node-controller  Node pause-966618 event: Registered Node pause-966618 in Controller
	  Normal  Starting                 28s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s (x8 over 27s)  kubelet          Node pause-966618 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x8 over 27s)  kubelet          Node pause-966618 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x7 over 27s)  kubelet          Node pause-966618 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10s                node-controller  Node pause-966618 event: Registered Node pause-966618 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.389175] systemd-fstab-generator[898]: Ignoring "noauto" for root device
	[  +0.297463] systemd-fstab-generator[933]: Ignoring "noauto" for root device
	[  +0.111266] systemd-fstab-generator[944]: Ignoring "noauto" for root device
	[  +0.130662] systemd-fstab-generator[957]: Ignoring "noauto" for root device
	[  +1.531108] systemd-fstab-generator[1105]: Ignoring "noauto" for root device
	[  +0.123090] systemd-fstab-generator[1116]: Ignoring "noauto" for root device
	[  +0.147358] systemd-fstab-generator[1127]: Ignoring "noauto" for root device
	[  +0.128731] systemd-fstab-generator[1138]: Ignoring "noauto" for root device
	[  +4.266481] systemd-fstab-generator[1388]: Ignoring "noauto" for root device
	[  +0.438156] kauditd_printk_skb: 68 callbacks suppressed
	[ +13.834461] systemd-fstab-generator[2130]: Ignoring "noauto" for root device
	[ +14.083667] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.636362] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.758826] systemd-fstab-generator[3528]: Ignoring "noauto" for root device
	[  +0.300612] systemd-fstab-generator[3567]: Ignoring "noauto" for root device
	[  +0.165547] systemd-fstab-generator[3578]: Ignoring "noauto" for root device
	[  +0.163297] systemd-fstab-generator[3591]: Ignoring "noauto" for root device
	[  +5.255153] kauditd_printk_skb: 2 callbacks suppressed
	[Feb24 01:24] systemd-fstab-generator[4603]: Ignoring "noauto" for root device
	[  +0.143453] systemd-fstab-generator[4646]: Ignoring "noauto" for root device
	[  +0.172343] systemd-fstab-generator[4689]: Ignoring "noauto" for root device
	[  +0.161512] systemd-fstab-generator[4710]: Ignoring "noauto" for root device
	[  +2.283866] kauditd_printk_skb: 38 callbacks suppressed
	[ +23.051873] kauditd_printk_skb: 7 callbacks suppressed
	[ +12.853107] systemd-fstab-generator[6743]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [409f89a75e66] <==
	* {"level":"warn","ts":"2023-02-24T01:24:58.779Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"396.147992ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2023-02-24T01:24:58.780Z","caller":"traceutil/trace.go:171","msg":"trace[2068455327] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:448; }","duration":"397.749335ms","start":"2023-02-24T01:24:58.383Z","end":"2023-02-24T01:24:58.780Z","steps":["trace[2068455327] 'agreement among raft nodes before linearized reading'  (duration: 396.111217ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-24T01:24:58.781Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"311.900642ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2023-02-24T01:24:58.781Z","caller":"traceutil/trace.go:171","msg":"trace[172381515] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:448; }","duration":"315.004084ms","start":"2023-02-24T01:24:58.466Z","end":"2023-02-24T01:24:58.781Z","steps":["trace[172381515] 'agreement among raft nodes before linearized reading'  (duration: 311.832922ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-24T01:24:58.781Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-24T01:24:58.466Z","time spent":"315.044393ms","remote":"127.0.0.1:49808","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":1,"response size":236,"request content":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" "}
	{"level":"warn","ts":"2023-02-24T01:24:58.782Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-24T01:24:58.383Z","time spent":"398.051633ms","remote":"127.0.0.1:49808","response type":"/etcdserverpb.KV/Range","request count":0,"request size":61,"response count":1,"response size":229,"request content":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" "}
	{"level":"info","ts":"2023-02-24T01:24:59.095Z","caller":"traceutil/trace.go:171","msg":"trace[924256197] linearizableReadLoop","detail":"{readStateIndex:488; appliedIndex:487; }","duration":"306.110554ms","start":"2023-02-24T01:24:58.789Z","end":"2023-02-24T01:24:59.095Z","steps":["trace[924256197] 'read index received'  (duration: 223.1288ms)","trace[924256197] 'applied index is now lower than readState.Index'  (duration: 82.981039ms)"],"step_count":2}
	{"level":"info","ts":"2023-02-24T01:24:59.095Z","caller":"traceutil/trace.go:171","msg":"trace[351778279] transaction","detail":"{read_only:false; response_revision:449; number_of_response:1; }","duration":"308.37778ms","start":"2023-02-24T01:24:58.787Z","end":"2023-02-24T01:24:59.095Z","steps":["trace[351778279] 'process raft request'  (duration: 225.367671ms)","trace[351778279] 'compare'  (duration: 82.48645ms)"],"step_count":2}
	{"level":"warn","ts":"2023-02-24T01:24:59.096Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-24T01:24:58.787Z","time spent":"308.44389ms","remote":"127.0.0.1:49806","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6210,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-966618\" mod_revision:447 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-966618\" value_size:6139 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-966618\" > >"}
	{"level":"warn","ts":"2023-02-24T01:24:59.096Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"306.35986ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2023-02-24T01:24:59.096Z","caller":"traceutil/trace.go:171","msg":"trace[530318559] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:449; }","duration":"306.400764ms","start":"2023-02-24T01:24:58.789Z","end":"2023-02-24T01:24:59.096Z","steps":["trace[530318559] 'agreement among raft nodes before linearized reading'  (duration: 306.297962ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-24T01:24:59.096Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-24T01:24:58.789Z","time spent":"306.435241ms","remote":"127.0.0.1:49808","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":1,"response size":236,"request content":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" "}
	{"level":"info","ts":"2023-02-24T01:24:59.218Z","caller":"traceutil/trace.go:171","msg":"trace[1957925680] transaction","detail":"{read_only:false; response_revision:451; number_of_response:1; }","duration":"115.525478ms","start":"2023-02-24T01:24:59.103Z","end":"2023-02-24T01:24:59.218Z","steps":["trace[1957925680] 'process raft request'  (duration: 115.468773ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-24T01:24:59.219Z","caller":"traceutil/trace.go:171","msg":"trace[777955538] transaction","detail":"{read_only:false; response_revision:450; number_of_response:1; }","duration":"428.473795ms","start":"2023-02-24T01:24:58.791Z","end":"2023-02-24T01:24:59.219Z","steps":["trace[777955538] 'process raft request'  (duration: 422.506505ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-24T01:24:59.220Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-24T01:24:58.791Z","time spent":"428.574339ms","remote":"127.0.0.1:49802","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":785,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:359 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:728 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >"}
	{"level":"info","ts":"2023-02-24T01:24:59.220Z","caller":"traceutil/trace.go:171","msg":"trace[1611141453] linearizableReadLoop","detail":"{readStateIndex:489; appliedIndex:488; }","duration":"124.607146ms","start":"2023-02-24T01:24:59.095Z","end":"2023-02-24T01:24:59.220Z","steps":["trace[1611141453] 'read index received'  (duration: 117.686257ms)","trace[1611141453] 'applied index is now lower than readState.Index'  (duration: 6.919668ms)"],"step_count":2}
	{"level":"warn","ts":"2023-02-24T01:24:59.221Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"430.380087ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2023-02-24T01:24:59.221Z","caller":"traceutil/trace.go:171","msg":"trace[1153661980] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:451; }","duration":"430.670167ms","start":"2023-02-24T01:24:58.790Z","end":"2023-02-24T01:24:59.221Z","steps":["trace[1153661980] 'agreement among raft nodes before linearized reading'  (duration: 429.639975ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-24T01:24:59.221Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-24T01:24:58.790Z","time spent":"430.712799ms","remote":"127.0.0.1:49808","response type":"/etcdserverpb.KV/Range","request count":0,"request size":61,"response count":1,"response size":229,"request content":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" "}
	{"level":"warn","ts":"2023-02-24T01:24:59.222Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"333.565172ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4011"}
	{"level":"info","ts":"2023-02-24T01:24:59.226Z","caller":"traceutil/trace.go:171","msg":"trace[1371051409] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:451; }","duration":"338.29311ms","start":"2023-02-24T01:24:58.888Z","end":"2023-02-24T01:24:59.226Z","steps":["trace[1371051409] 'agreement among raft nodes before linearized reading'  (duration: 333.512656ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-24T01:24:59.227Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-24T01:24:58.888Z","time spent":"338.449423ms","remote":"127.0.0.1:49878","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":1,"response size":4033,"request content":"key:\"/registry/deployments/kube-system/coredns\" "}
	{"level":"warn","ts":"2023-02-24T01:24:59.222Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"421.117462ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-966618\" ","response":"range_response_count:1 size:5470"}
	{"level":"info","ts":"2023-02-24T01:24:59.227Z","caller":"traceutil/trace.go:171","msg":"trace[1267808777] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-966618; range_end:; response_count:1; response_revision:451; }","duration":"426.410685ms","start":"2023-02-24T01:24:58.801Z","end":"2023-02-24T01:24:59.227Z","steps":["trace[1267808777] 'agreement among raft nodes before linearized reading'  (duration: 421.051096ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-24T01:24:59.227Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-24T01:24:58.801Z","time spent":"426.593622ms","remote":"127.0.0.1:49806","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":5492,"request content":"key:\"/registry/pods/kube-system/etcd-pause-966618\" "}
	
	* 
	* ==> etcd [4fbc20ae5bf7] <==
	* {"level":"info","ts":"2023-02-24T01:24:22.398Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-02-24T01:24:22.399Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"602de89049e69a5d","initial-advertise-peer-urls":["https://192.168.50.59:2380"],"listen-peer-urls":["https://192.168.50.59:2380"],"advertise-client-urls":["https://192.168.50.59:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.59:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-02-24T01:24:22.399Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-24T01:24:22.399Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.50.59:2380"}
	{"level":"info","ts":"2023-02-24T01:24:22.399Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.50.59:2380"}
	{"level":"info","ts":"2023-02-24T01:24:22.477Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602de89049e69a5d is starting a new election at term 3"}
	{"level":"info","ts":"2023-02-24T01:24:22.478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602de89049e69a5d became pre-candidate at term 3"}
	{"level":"info","ts":"2023-02-24T01:24:22.478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602de89049e69a5d received MsgPreVoteResp from 602de89049e69a5d at term 3"}
	{"level":"info","ts":"2023-02-24T01:24:22.478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602de89049e69a5d became candidate at term 4"}
	{"level":"info","ts":"2023-02-24T01:24:22.478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602de89049e69a5d received MsgVoteResp from 602de89049e69a5d at term 4"}
	{"level":"info","ts":"2023-02-24T01:24:22.478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602de89049e69a5d became leader at term 4"}
	{"level":"info","ts":"2023-02-24T01:24:22.478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 602de89049e69a5d elected leader 602de89049e69a5d at term 4"}
	{"level":"info","ts":"2023-02-24T01:24:22.484Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"602de89049e69a5d","local-member-attributes":"{Name:pause-966618 ClientURLs:[https://192.168.50.59:2379]}","request-path":"/0/members/602de89049e69a5d/attributes","cluster-id":"47ab5ca4b9a8bf42","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-24T01:24:22.484Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-24T01:24:22.486Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.50.59:2379"}
	{"level":"info","ts":"2023-02-24T01:24:22.486Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-24T01:24:22.488Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-24T01:24:22.498Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-24T01:24:22.498Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-24T01:24:33.984Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-02-24T01:24:33.984Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-966618","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.59:2380"],"advertise-client-urls":["https://192.168.50.59:2379"]}
	{"level":"info","ts":"2023-02-24T01:24:33.988Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"602de89049e69a5d","current-leader-member-id":"602de89049e69a5d"}
	{"level":"info","ts":"2023-02-24T01:24:33.991Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.50.59:2380"}
	{"level":"info","ts":"2023-02-24T01:24:33.993Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.50.59:2380"}
	{"level":"info","ts":"2023-02-24T01:24:33.993Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-966618","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.59:2380"],"advertise-client-urls":["https://192.168.50.59:2379"]}
	
	* 
	* ==> kernel <==
	*  01:25:08 up 2 min,  0 users,  load average: 1.18, 0.65, 0.25
	Linux pause-966618 5.10.57 #1 SMP Thu Feb 16 22:09:52 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [7fd1d7e9bc79] <==
	* I0224 01:24:45.277420       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0224 01:24:45.277438       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0224 01:24:45.277452       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0224 01:24:45.282226       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0224 01:24:45.282272       1 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
	I0224 01:24:45.393051       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0224 01:24:45.406590       1 controller.go:159] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0224 01:24:45.431607       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0224 01:24:45.462851       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0224 01:24:45.464389       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0224 01:24:45.464884       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0224 01:24:45.465041       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0224 01:24:45.477336       1 shared_informer.go:280] Caches are synced for configmaps
	I0224 01:24:45.477430       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0224 01:24:45.482373       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0224 01:24:45.487609       1 cache.go:39] Caches are synced for autoregister controller
	I0224 01:24:46.017922       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0224 01:24:46.283818       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0224 01:24:47.060948       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0224 01:24:47.075700       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0224 01:24:47.148726       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0224 01:24:47.186879       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0224 01:24:47.205326       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0224 01:24:58.789355       1 controller.go:615] quota admission added evaluator for: endpoints
	I0224 01:24:59.102212       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [f3b13c7b2655] <==
	* W0224 01:24:16.941998       1 logging.go:59] [core] [Channel #3 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 01:24:17.122908       1 logging.go:59] [core] [Channel #4 SubChannel #5] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 01:24:20.774303       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	E0224 01:24:25.948672       1 run.go:74] "command failed" err="context deadline exceeded"
	
	* 
	* ==> kube-controller-manager [201f4222b6ec] <==
	* I0224 01:24:58.335249       1 shared_informer.go:280] Caches are synced for ReplicationController
	I0224 01:24:58.335299       1 shared_informer.go:280] Caches are synced for cronjob
	I0224 01:24:58.335334       1 shared_informer.go:280] Caches are synced for ephemeral
	I0224 01:24:58.336478       1 shared_informer.go:280] Caches are synced for disruption
	I0224 01:24:58.340321       1 shared_informer.go:280] Caches are synced for PV protection
	I0224 01:24:58.345378       1 shared_informer.go:280] Caches are synced for stateful set
	I0224 01:24:58.349859       1 shared_informer.go:280] Caches are synced for node
	I0224 01:24:58.349914       1 range_allocator.go:167] Sending events to api server.
	I0224 01:24:58.349929       1 range_allocator.go:171] Starting range CIDR allocator
	I0224 01:24:58.349933       1 shared_informer.go:273] Waiting for caches to sync for cidrallocator
	I0224 01:24:58.349940       1 shared_informer.go:280] Caches are synced for cidrallocator
	I0224 01:24:58.351214       1 shared_informer.go:280] Caches are synced for namespace
	I0224 01:24:58.351291       1 shared_informer.go:280] Caches are synced for certificate-csrapproving
	I0224 01:24:58.354562       1 shared_informer.go:280] Caches are synced for service account
	I0224 01:24:58.358699       1 shared_informer.go:280] Caches are synced for job
	I0224 01:24:58.358967       1 shared_informer.go:280] Caches are synced for TTL after finished
	I0224 01:24:58.369950       1 shared_informer.go:280] Caches are synced for deployment
	I0224 01:24:58.385823       1 shared_informer.go:280] Caches are synced for HPA
	I0224 01:24:58.428974       1 shared_informer.go:280] Caches are synced for endpoint_slice_mirroring
	I0224 01:24:58.440564       1 shared_informer.go:280] Caches are synced for resource quota
	I0224 01:24:58.463937       1 shared_informer.go:280] Caches are synced for endpoint_slice
	I0224 01:24:58.510160       1 shared_informer.go:280] Caches are synced for resource quota
	I0224 01:24:58.884872       1 shared_informer.go:280] Caches are synced for garbage collector
	I0224 01:24:58.886233       1 shared_informer.go:280] Caches are synced for garbage collector
	I0224 01:24:58.886394       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-controller-manager [ad958c9f692b] <==
	* I0224 01:24:22.903296       1 serving.go:348] Generated self-signed cert in-memory
	I0224 01:24:23.288434       1 controllermanager.go:182] Version: v1.26.1
	I0224 01:24:23.288645       1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 01:24:23.289776       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0224 01:24:23.289998       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0224 01:24:23.290033       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0224 01:24:23.290299       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-proxy [426996151fbc] <==
	* E0224 01:24:19.236934       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-966618": net/http: TLS handshake timeout
	E0224 01:24:26.955373       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-966618": dial tcp 192.168.50.59:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.50.59:52556->192.168.50.59:8443: read: connection reset by peer
	E0224 01:24:29.025061       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-966618": dial tcp 192.168.50.59:8443: connect: connection refused
	E0224 01:24:33.214153       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-966618": dial tcp 192.168.50.59:8443: connect: connection refused
	
	* 
	* ==> kube-proxy [ec57ffc07efe] <==
	* I0224 01:24:47.701077       1 node.go:163] Successfully retrieved node IP: 192.168.50.59
	I0224 01:24:47.701185       1 server_others.go:109] "Detected node IP" address="192.168.50.59"
	I0224 01:24:47.701244       1 server_others.go:535] "Using iptables proxy"
	I0224 01:24:47.754176       1 server_others.go:170] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0224 01:24:47.754223       1 server_others.go:176] "Using iptables Proxier"
	I0224 01:24:47.754265       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0224 01:24:47.754668       1 server.go:655] "Version info" version="v1.26.1"
	I0224 01:24:47.754706       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 01:24:47.755405       1 config.go:317] "Starting service config controller"
	I0224 01:24:47.755464       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0224 01:24:47.755578       1 config.go:226] "Starting endpoint slice config controller"
	I0224 01:24:47.755611       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0224 01:24:47.756163       1 config.go:444] "Starting node config controller"
	I0224 01:24:47.756200       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0224 01:24:47.855698       1 shared_informer.go:280] Caches are synced for service config
	I0224 01:24:47.856103       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0224 01:24:47.857614       1 shared_informer.go:280] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [4aabdcdde9cf] <==
	* I0224 01:24:42.721262       1 serving.go:348] Generated self-signed cert in-memory
	W0224 01:24:45.285027       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0224 01:24:45.285102       1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0224 01:24:45.285116       1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0224 01:24:45.285127       1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0224 01:24:45.372919       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.1"
	I0224 01:24:45.372973       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 01:24:45.380469       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0224 01:24:45.381072       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0224 01:24:45.381829       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0224 01:24:45.384475       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0224 01:24:45.483024       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [e8da938203fc] <==
	* W0224 01:24:30.995321       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.50.59:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	E0224 01:24:30.995404       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.50.59:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	W0224 01:24:31.056334       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.50.59:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	E0224 01:24:31.056430       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.50.59:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	W0224 01:24:31.136416       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.50.59:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	E0224 01:24:31.136565       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.50.59:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	W0224 01:24:31.334273       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.50.59:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	E0224 01:24:31.334372       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.50.59:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	W0224 01:24:31.335819       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.50.59:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	E0224 01:24:31.335924       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.50.59:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	W0224 01:24:31.367835       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.50.59:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	E0224 01:24:31.367912       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.50.59:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	W0224 01:24:31.417773       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.50.59:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	E0224 01:24:31.417833       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.50.59:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	W0224 01:24:31.556026       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.50.59:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	E0224 01:24:31.556401       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.50.59:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	W0224 01:24:33.420873       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.50.59:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	E0224 01:24:33.420987       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.50.59:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	W0224 01:24:33.849309       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.50.59:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	E0224 01:24:33.849403       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.50.59:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.50.59:8443: connect: connection refused
	I0224 01:24:33.988126       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E0224 01:24:33.988445       1 shared_informer.go:276] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0224 01:24:33.988551       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0224 01:24:33.988748       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0224 01:24:33.988912       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Fri 2023-02-24 01:22:46 UTC, ends at Fri 2023-02-24 01:25:09 UTC. --
	Feb 24 01:24:41 pause-966618 kubelet[6749]: I0224 01:24:41.459367    6749 scope.go:115] "RemoveContainer" containerID="4fbc20ae5bf7c11474da8994a88ff02eebe65f7997961567f7acd8a88f6b70e4"
	Feb 24 01:24:41 pause-966618 kubelet[6749]: I0224 01:24:41.483233    6749 scope.go:115] "RemoveContainer" containerID="ad958c9f692bfe0b733272d6b661b68c16ddf38e0d88c0bbaeac5d37efb4e6be"
	Feb 24 01:24:41 pause-966618 kubelet[6749]: I0224 01:24:41.494680    6749 scope.go:115] "RemoveContainer" containerID="e8da938203fc176f8393b15585f7d1a66c7833ee712276046b0abc360284ce20"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: E0224 01:24:45.455274    6749 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-pause-966618\" already exists" pod="kube-system/kube-scheduler-pause-966618"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.477059    6749 kubelet_node_status.go:108] "Node was previously registered" node="pause-966618"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.477795    6749 kubelet_node_status.go:73] "Successfully registered node" node="pause-966618"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.488588    6749 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.489897    6749 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.882635    6749 apiserver.go:52] "Watching apiserver"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.885364    6749 topology_manager.go:210] "Topology Admit Handler"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.885462    6749 topology_manager.go:210] "Topology Admit Handler"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.915998    6749 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.953848    6749 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98036b9d-4a03-4d42-9f71-28b8df888be5-xtables-lock\") pod \"kube-proxy-7wlbf\" (UID: \"98036b9d-4a03-4d42-9f71-28b8df888be5\") " pod="kube-system/kube-proxy-7wlbf"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.954076    6749 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzpqk\" (UniqueName: \"kubernetes.io/projected/864031ca-0190-46a6-9191-bed0ab15761f-kube-api-access-bzpqk\") pod \"coredns-787d4945fb-5kk6f\" (UID: \"864031ca-0190-46a6-9191-bed0ab15761f\") " pod="kube-system/coredns-787d4945fb-5kk6f"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.954251    6749 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/864031ca-0190-46a6-9191-bed0ab15761f-config-volume\") pod \"coredns-787d4945fb-5kk6f\" (UID: \"864031ca-0190-46a6-9191-bed0ab15761f\") " pod="kube-system/coredns-787d4945fb-5kk6f"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.954425    6749 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98036b9d-4a03-4d42-9f71-28b8df888be5-lib-modules\") pod \"kube-proxy-7wlbf\" (UID: \"98036b9d-4a03-4d42-9f71-28b8df888be5\") " pod="kube-system/kube-proxy-7wlbf"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.954675    6749 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrvzk\" (UniqueName: \"kubernetes.io/projected/98036b9d-4a03-4d42-9f71-28b8df888be5-kube-api-access-xrvzk\") pod \"kube-proxy-7wlbf\" (UID: \"98036b9d-4a03-4d42-9f71-28b8df888be5\") " pod="kube-system/kube-proxy-7wlbf"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.954776    6749 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/98036b9d-4a03-4d42-9f71-28b8df888be5-kube-proxy\") pod \"kube-proxy-7wlbf\" (UID: \"98036b9d-4a03-4d42-9f71-28b8df888be5\") " pod="kube-system/kube-proxy-7wlbf"
	Feb 24 01:24:45 pause-966618 kubelet[6749]: I0224 01:24:45.954830    6749 reconciler.go:41] "Reconciler: start to sync state"
	Feb 24 01:24:47 pause-966618 kubelet[6749]: I0224 01:24:47.189353    6749 request.go:690] Waited for 1.131451949s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token
	Feb 24 01:24:47 pause-966618 kubelet[6749]: I0224 01:24:47.386628    6749 scope.go:115] "RemoveContainer" containerID="426996151fbc3fd63eb00bba1849a621ea6f53c215166768e47eada4bbf6243f"
	Feb 24 01:24:47 pause-966618 kubelet[6749]: I0224 01:24:47.937765    6749 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72c6739efe2b9161a1c69abbcab9fc1b036dbeffdf34fa80ad551d801f4490eb"
	Feb 24 01:24:49 pause-966618 kubelet[6749]: I0224 01:24:49.986140    6749 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Feb 24 01:24:50 pause-966618 kubelet[6749]: I0224 01:24:50.995334    6749 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	Feb 24 01:24:53 pause-966618 kubelet[6749]: I0224 01:24:53.800830    6749 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-966618 -n pause-966618
helpers_test.go:261: (dbg) Run:  kubectl --context pause-966618 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (77.64s)
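The dominant symptom in the post-mortem logs above is etcd repeatedly warning "apply request took too long" (expected-duration 100ms, observed 300-430ms), which usually points at slow disk fsync/commit on the VM rather than at the pause/unpause logic itself. A minimal follow-up sketch, assuming the pause-966618 profile is still running and that curl is available inside the guest; the etcd log above reports its metrics listener on http://127.0.0.1:2381:

	# Hedged diagnostic sketch (not part of the test run): pull etcd's
	# disk-latency histograms from the metrics endpoint it logged above.
	out/minikube-linux-amd64 ssh -p pause-966618 \
	  "curl -s http://127.0.0.1:2381/metrics | grep -E 'etcd_disk_(wal_fsync|backend_commit)_duration_seconds_(sum|count)'"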

                                                
                                    

Test pass (269/300)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 5.7
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.26.1/json-events 3.93
11 TestDownloadOnly/v1.26.1/preload-exists 0
15 TestDownloadOnly/v1.26.1/LogsDuration 0.06
16 TestDownloadOnly/DeleteAll 0.36
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.35
19 TestBinaryMirror 0.62
20 TestOffline 85.01
22 TestAddons/Setup 151.24
24 TestAddons/parallel/Registry 16.62
25 TestAddons/parallel/Ingress 23.47
26 TestAddons/parallel/MetricsServer 5.63
27 TestAddons/parallel/HelmTiller 13.27
29 TestAddons/parallel/CSI 65.56
30 TestAddons/parallel/Headlamp 11.24
31 TestAddons/parallel/CloudSpanner 5.42
34 TestAddons/serial/GCPAuth/Namespaces 0.12
35 TestAddons/StoppedEnableDisable 13.26
36 TestCertOptions 65.66
37 TestCertExpiration 319.68
38 TestDockerFlags 95.05
39 TestForceSystemdFlag 62.49
40 TestForceSystemdEnv 82.87
41 TestKVMDriverInstallOrUpdate 5.7
45 TestErrorSpam/setup 53.98
46 TestErrorSpam/start 0.35
47 TestErrorSpam/status 0.74
48 TestErrorSpam/pause 1.22
49 TestErrorSpam/unpause 1.32
50 TestErrorSpam/stop 4.22
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 73.47
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 70.54
57 TestFunctional/serial/KubeContext 0.04
58 TestFunctional/serial/KubectlGetPods 0.09
61 TestFunctional/serial/CacheCmd/cache/add_remote 5.11
62 TestFunctional/serial/CacheCmd/cache/add_local 1.38
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.05
64 TestFunctional/serial/CacheCmd/cache/list 0.05
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
66 TestFunctional/serial/CacheCmd/cache/cache_reload 1.66
67 TestFunctional/serial/CacheCmd/cache/delete 0.09
68 TestFunctional/serial/MinikubeKubectlCmd 0.11
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
70 TestFunctional/serial/ExtraConfig 47.59
71 TestFunctional/serial/ComponentHealth 0.06
72 TestFunctional/serial/LogsCmd 1.11
73 TestFunctional/serial/LogsFileCmd 1.18
75 TestFunctional/parallel/ConfigCmd 0.33
76 TestFunctional/parallel/DashboardCmd 15.45
77 TestFunctional/parallel/DryRun 0.27
78 TestFunctional/parallel/InternationalLanguage 0.14
79 TestFunctional/parallel/StatusCmd 1.05
83 TestFunctional/parallel/ServiceCmdConnect 9.57
84 TestFunctional/parallel/AddonsCmd 0.19
85 TestFunctional/parallel/PersistentVolumeClaim 56.01
87 TestFunctional/parallel/SSHCmd 0.61
88 TestFunctional/parallel/CpCmd 0.86
89 TestFunctional/parallel/MySQL 36.74
90 TestFunctional/parallel/FileSync 0.2
91 TestFunctional/parallel/CertSync 1.37
95 TestFunctional/parallel/NodeLabels 0.06
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.21
99 TestFunctional/parallel/License 0.13
100 TestFunctional/parallel/Version/short 0.05
101 TestFunctional/parallel/Version/components 0.74
102 TestFunctional/parallel/ImageCommands/ImageListShort 0.39
103 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
104 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
105 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
106 TestFunctional/parallel/ImageCommands/ImageBuild 4.28
107 TestFunctional/parallel/ImageCommands/Setup 1.2
108 TestFunctional/parallel/DockerEnv/bash 0.92
109 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
110 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
111 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
112 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.88
113 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.45
114 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.77
115 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.02
116 TestFunctional/parallel/ServiceCmd/ServiceJSONOutput 0.35
117 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
118 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.57
119 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
120 TestFunctional/parallel/ProfileCmd/profile_list 0.32
121 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
122 TestFunctional/parallel/MountCmd/any-port 20.99
123 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.42
132 TestFunctional/parallel/MountCmd/specific-port 1.94
133 TestFunctional/delete_addon-resizer_images 0.15
134 TestFunctional/delete_my-image_image 0.06
135 TestFunctional/delete_minikube_cached_images 0.06
136 TestGvisorAddon 409.11
139 TestImageBuild/serial/NormalBuild 2.29
140 TestImageBuild/serial/BuildWithBuildArg 1.44
141 TestImageBuild/serial/BuildWithDockerIgnore 0.46
142 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.34
145 TestIngressAddonLegacy/StartLegacyK8sCluster 107.22
147 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 14.21
148 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.38
149 TestIngressAddonLegacy/serial/ValidateIngressAddons 36.49
152 TestJSONOutput/start/Command 108.96
153 TestJSONOutput/start/Audit 0
155 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
156 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
158 TestJSONOutput/pause/Command 0.59
159 TestJSONOutput/pause/Audit 0
161 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
162 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
164 TestJSONOutput/unpause/Command 0.52
165 TestJSONOutput/unpause/Audit 0
167 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
168 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
170 TestJSONOutput/stop/Command 13.11
171 TestJSONOutput/stop/Audit 0
173 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
175 TestErrorJSONOutput 0.42
180 TestMainNoArgs 0.04
181 TestMinikubeProfile 121.49
184 TestMountStart/serial/StartWithMountFirst 31.03
185 TestMountStart/serial/VerifyMountFirst 0.37
186 TestMountStart/serial/StartWithMountSecond 29.03
187 TestMountStart/serial/VerifyMountSecond 0.37
188 TestMountStart/serial/DeleteFirst 0.91
189 TestMountStart/serial/VerifyMountPostDelete 0.37
190 TestMountStart/serial/Stop 2.22
191 TestMountStart/serial/RestartStopped 25.01
192 TestMountStart/serial/VerifyMountPostStop 0.37
195 TestMultiNode/serial/FreshStart2Nodes 131.78
196 TestMultiNode/serial/DeployApp2Nodes 4.85
197 TestMultiNode/serial/PingHostFrom2Pods 0.86
198 TestMultiNode/serial/AddNode 55.44
199 TestMultiNode/serial/ProfileList 0.26
200 TestMultiNode/serial/CopyFile 7.24
201 TestMultiNode/serial/StopNode 3.92
203 TestMultiNode/serial/RestartKeepsNodes 257.18
204 TestMultiNode/serial/DeleteNode 1.74
205 TestMultiNode/serial/StopMultiNode 25.61
206 TestMultiNode/serial/RestartMultiNode 106.34
207 TestMultiNode/serial/ValidateNameConflict 57.17
212 TestPreload 165.5
214 TestScheduledStopUnix 125.14
215 TestSkaffold 88.19
218 TestRunningBinaryUpgrade 184.68
220 TestKubernetesUpgrade 238.34
233 TestStoppedBinaryUpgrade/Setup 0.36
234 TestStoppedBinaryUpgrade/Upgrade 194.66
235 TestStoppedBinaryUpgrade/MinikubeLogs 1
237 TestPause/serial/Start 95.06
247 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
248 TestNoKubernetes/serial/StartWithK8s 68.88
249 TestNetworkPlugins/group/auto/Start 82.06
250 TestNoKubernetes/serial/StartWithStopK8s 19.82
251 TestNoKubernetes/serial/Start 33.51
252 TestNetworkPlugins/group/kindnet/Start 80.8
253 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
254 TestNoKubernetes/serial/ProfileList 2.08
255 TestNoKubernetes/serial/Stop 2.21
256 TestNoKubernetes/serial/StartNoArgs 50.26
257 TestNetworkPlugins/group/auto/KubeletFlags 0.21
258 TestNetworkPlugins/group/auto/NetCatPod 13.29
259 TestNetworkPlugins/group/auto/DNS 0.17
260 TestNetworkPlugins/group/auto/Localhost 0.14
261 TestNetworkPlugins/group/auto/HairPin 0.16
262 TestNetworkPlugins/group/calico/Start 119.62
263 TestNetworkPlugins/group/custom-flannel/Start 133.64
264 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
265 TestNetworkPlugins/group/false/Start 134.59
266 TestNetworkPlugins/group/kindnet/ControllerPod 5.04
267 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
268 TestNetworkPlugins/group/kindnet/NetCatPod 13.43
269 TestNetworkPlugins/group/kindnet/DNS 0.26
270 TestNetworkPlugins/group/kindnet/Localhost 0.17
271 TestNetworkPlugins/group/kindnet/HairPin 0.15
272 TestNetworkPlugins/group/enable-default-cni/Start 98.66
273 TestNetworkPlugins/group/calico/ControllerPod 5.02
274 TestNetworkPlugins/group/calico/KubeletFlags 0.25
275 TestNetworkPlugins/group/calico/NetCatPod 18.61
276 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
277 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.39
278 TestNetworkPlugins/group/calico/DNS 0.3
279 TestNetworkPlugins/group/calico/Localhost 0.18
280 TestNetworkPlugins/group/calico/HairPin 0.21
281 TestNetworkPlugins/group/custom-flannel/DNS 0.28
282 TestNetworkPlugins/group/custom-flannel/Localhost 0.26
283 TestNetworkPlugins/group/custom-flannel/HairPin 0.25
284 TestNetworkPlugins/group/false/KubeletFlags 0.25
285 TestNetworkPlugins/group/false/NetCatPod 14.45
286 TestNetworkPlugins/group/flannel/Start 92.55
287 TestNetworkPlugins/group/false/DNS 0.2
288 TestNetworkPlugins/group/false/Localhost 0.18
289 TestNetworkPlugins/group/false/HairPin 0.15
290 TestNetworkPlugins/group/bridge/Start 101.28
291 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
292 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.29
293 TestNetworkPlugins/group/kubenet/Start 117.44
294 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
295 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
296 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
298 TestStartStop/group/old-k8s-version/serial/FirstStart 196.86
299 TestNetworkPlugins/group/flannel/ControllerPod 5.54
300 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
301 TestNetworkPlugins/group/flannel/NetCatPod 13.4
302 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
303 TestNetworkPlugins/group/bridge/NetCatPod 13.32
304 TestNetworkPlugins/group/flannel/DNS 0.21
305 TestNetworkPlugins/group/flannel/Localhost 0.2
306 TestNetworkPlugins/group/flannel/HairPin 0.17
307 TestNetworkPlugins/group/bridge/DNS 0.25
308 TestNetworkPlugins/group/bridge/Localhost 0.22
309 TestNetworkPlugins/group/bridge/HairPin 0.21
311 TestStartStop/group/no-preload/serial/FirstStart 100.08
312 TestNetworkPlugins/group/kubenet/KubeletFlags 0.78
313 TestNetworkPlugins/group/kubenet/NetCatPod 13.4
315 TestStartStop/group/embed-certs/serial/FirstStart 100.54
316 TestNetworkPlugins/group/kubenet/DNS 0.17
317 TestNetworkPlugins/group/kubenet/Localhost 0.14
318 TestNetworkPlugins/group/kubenet/HairPin 0.16
320 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 119.28
321 TestStartStop/group/no-preload/serial/DeployApp 10.46
322 TestStartStop/group/embed-certs/serial/DeployApp 9.43
323 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.97
324 TestStartStop/group/no-preload/serial/Stop 13.17
325 TestStartStop/group/old-k8s-version/serial/DeployApp 8.49
326 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1
327 TestStartStop/group/embed-certs/serial/Stop 13.12
328 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.02
329 TestStartStop/group/old-k8s-version/serial/Stop 13.15
330 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
331 TestStartStop/group/no-preload/serial/SecondStart 314.78
332 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
333 TestStartStop/group/embed-certs/serial/SecondStart 319.67
334 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
335 TestStartStop/group/old-k8s-version/serial/SecondStart 475.94
336 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.45
337 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.08
338 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.14
339 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
340 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 337.78
341 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
342 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
343 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
344 TestStartStop/group/no-preload/serial/Pause 2.58
345 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
347 TestStartStop/group/newest-cni/serial/FirstStart 77.22
348 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
349 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
350 TestStartStop/group/embed-certs/serial/Pause 2.9
351 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.02
352 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
353 TestStartStop/group/newest-cni/serial/DeployApp 0
354 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.05
355 TestStartStop/group/newest-cni/serial/Stop 13.12
356 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
357 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.55
358 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
359 TestStartStop/group/newest-cni/serial/SecondStart 46.56
360 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
361 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
362 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
363 TestStartStop/group/newest-cni/serial/Pause 2.32
364 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
365 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
366 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
367 TestStartStop/group/old-k8s-version/serial/Pause 2.38
TestDownloadOnly/v1.16.0/json-events (5.7s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-396438 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-396438 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 : (5.698651298s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (5.70s)
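The json-events subtest asserts that this start emits a well-formed stream of JSON events, one CloudEvent-style object per line on stdout. A minimal sketch of consuming the same stream by hand, assuming jq is installed and that progress events carry the type io.k8s.sigs.minikube.step with a data.name field, as minikube's -o=json output does:

	# Hedged sketch: replay the download-only start and print each progress
	# step name from the newline-delimited JSON event stream.
	out/minikube-linux-amd64 start -o=json --download-only -p download-only-396438 \
	    --force --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.name'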

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
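preload-exists only checks that the tarball fetched by the previous subtest landed in the local cache. A hand-run equivalent, hedged on MINIKUBE_HOME pointing at the integration directory used in this report (the full path appears in the download log further below):

	# Hedged sketch: confirm the v1.16.0 docker preload tarball is cached.
	ls -lh "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4"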

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-396438
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-396438: exit status 85 (60.432773ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-396438 | jenkins | v1.29.0 | 24 Feb 23 00:40 UTC |          |
	|         | -p download-only-396438        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/24 00:40:58
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.20.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 00:40:58.260610   11143 out.go:296] Setting OutFile to fd 1 ...
	I0224 00:40:58.261144   11143 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 00:40:58.261185   11143 out.go:309] Setting ErrFile to fd 2...
	I0224 00:40:58.261203   11143 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 00:40:58.261500   11143 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-4074/.minikube/bin
	W0224 00:40:58.261894   11143 root.go:312] Error reading config file at /home/jenkins/minikube-integration/15909-4074/.minikube/config/config.json: open /home/jenkins/minikube-integration/15909-4074/.minikube/config/config.json: no such file or directory
	I0224 00:40:58.262708   11143 out.go:303] Setting JSON to true
	I0224 00:40:58.263447   11143 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":1408,"bootTime":1677197851,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 00:40:58.263502   11143 start.go:135] virtualization: kvm guest
	I0224 00:40:58.265910   11143 out.go:97] [download-only-396438] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	W0224 00:40:58.265997   11143 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball: no such file or directory
	I0224 00:40:58.267330   11143 out.go:169] MINIKUBE_LOCATION=15909
	I0224 00:40:58.266091   11143 notify.go:220] Checking for updates...
	I0224 00:40:58.269743   11143 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 00:40:58.271110   11143 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15909-4074/kubeconfig
	I0224 00:40:58.272416   11143 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-4074/.minikube
	I0224 00:40:58.273642   11143 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0224 00:40:58.275830   11143 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0224 00:40:58.276005   11143 driver.go:365] Setting default libvirt URI to qemu:///system
	I0224 00:40:58.395418   11143 out.go:97] Using the kvm2 driver based on user configuration
	I0224 00:40:58.395439   11143 start.go:296] selected driver: kvm2
	I0224 00:40:58.395444   11143 start.go:857] validating driver "kvm2" against <nil>
	I0224 00:40:58.395732   11143 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 00:40:58.395842   11143 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15909-4074/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0224 00:40:58.409506   11143 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.29.0
	I0224 00:40:58.409550   11143 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0224 00:40:58.409953   11143 start_flags.go:386] Using suggested 6000MB memory alloc based on sys=32101MB, container=0MB
	I0224 00:40:58.410082   11143 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0224 00:40:58.410109   11143 cni.go:84] Creating CNI manager for ""
	I0224 00:40:58.410125   11143 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0224 00:40:58.410132   11143 start_flags.go:319] config:
	{Name:download-only-396438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-396438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 00:40:58.410283   11143 iso.go:125] acquiring lock: {Name:mkc3d6185dc03bdb5dc9fb9cd39dd085e0eef640 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 00:40:58.412119   11143 out.go:97] Downloading VM boot image ...
	I0224 00:40:58.412141   11143 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso.sha256 -> /home/jenkins/minikube-integration/15909-4074/.minikube/cache/iso/amd64/minikube-v1.29.0-1676568791-15849-amd64.iso
	I0224 00:41:00.191082   11143 out.go:97] Starting control plane node download-only-396438 in cluster download-only-396438
	I0224 00:41:00.191104   11143 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0224 00:41:00.214201   11143 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0224 00:41:00.214237   11143 cache.go:57] Caching tarball of preloaded images
	I0224 00:41:00.214400   11143 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0224 00:41:00.216422   11143 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0224 00:41:00.216449   11143 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0224 00:41:00.254487   11143 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0224 00:41:02.632603   11143 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0224 00:41:02.632686   11143 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15909-4074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-396438"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

TestDownloadOnly/v1.26.1/json-events (3.93s)

=== RUN   TestDownloadOnly/v1.26.1/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-396438 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-396438 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=kvm2 : (3.925775908s)
--- PASS: TestDownloadOnly/v1.26.1/json-events (3.93s)

TestDownloadOnly/v1.26.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.26.1/preload-exists
--- PASS: TestDownloadOnly/v1.26.1/preload-exists (0.00s)

TestDownloadOnly/v1.26.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.26.1/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-396438
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-396438: exit status 85 (60.800995ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-396438 | jenkins | v1.29.0 | 24 Feb 23 00:40 UTC |          |
	|         | -p download-only-396438        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-396438 | jenkins | v1.29.0 | 24 Feb 23 00:41 UTC |          |
	|         | -p download-only-396438        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.1   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/24 00:41:04
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.20.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 00:41:04.024046   11179 out.go:296] Setting OutFile to fd 1 ...
	I0224 00:41:04.024155   11179 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 00:41:04.024163   11179 out.go:309] Setting ErrFile to fd 2...
	I0224 00:41:04.024167   11179 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 00:41:04.024508   11179 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-4074/.minikube/bin
	W0224 00:41:04.024732   11179 root.go:312] Error reading config file at /home/jenkins/minikube-integration/15909-4074/.minikube/config/config.json: open /home/jenkins/minikube-integration/15909-4074/.minikube/config/config.json: no such file or directory
	I0224 00:41:04.025294   11179 out.go:303] Setting JSON to true
	I0224 00:41:04.026322   11179 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":1413,"bootTime":1677197851,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 00:41:04.026387   11179 start.go:135] virtualization: kvm guest
	I0224 00:41:04.028427   11179 out.go:97] [download-only-396438] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0224 00:41:04.029839   11179 out.go:169] MINIKUBE_LOCATION=15909
	I0224 00:41:04.028577   11179 notify.go:220] Checking for updates...
	I0224 00:41:04.032363   11179 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 00:41:04.033773   11179 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15909-4074/kubeconfig
	I0224 00:41:04.034987   11179 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-4074/.minikube
	I0224 00:41:04.036182   11179 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-396438"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.1/LogsDuration (0.06s)
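Both LogsDuration subtests above exercise the same edge case: a --download-only profile caches artifacts but never creates a node, so `minikube logs` fails with exit status 85 and the "control plane node does not exist" message seen in the stdout capture. For reference, a minimal, hypothetical Go sketch (not the harness's actual code) of checking that exit status with os/exec; the binary path and profile name are the ones from this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Run "minikube logs" against the download-only profile from this report.
	cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-396438")
	err := cmd.Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// The harness treats a specific non-zero status (85 here) as the expected outcome.
		fmt.Printf("minikube logs exited with status %d\n", exitErr.ExitCode())
		return
	}
	// nil error would mean the command unexpectedly succeeded.
	fmt.Println("unexpected result:", err)
}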

TestDownloadOnly/DeleteAll (0.36s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.36s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.35s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-396438
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.35s)

TestBinaryMirror (0.62s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:308: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-217199 --alsologtostderr --binary-mirror http://127.0.0.1:40325 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-217199" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-217199
--- PASS: TestBinaryMirror (0.62s)

TestOffline (85.01s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-579785 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-579785 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m23.935910961s)
helpers_test.go:175: Cleaning up "offline-docker-579785" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-579785
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-579785: (1.07265339s)
--- PASS: TestOffline (85.01s)

TestAddons/Setup (151.24s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-031105 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-031105 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m31.235721456s)
--- PASS: TestAddons/Setup (151.24s)

TestAddons/parallel/Registry (16.62s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:295: registry stabilized in 22.843432ms
addons_test.go:297: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-dt7ff" [e9b94408-133b-4eb4-bba3-fa49213087f2] Running
addons_test.go:297: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.014385338s
addons_test.go:300: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-k652b" [5fd61881-c7f2-48b0-98d1-f35976b855f4] Running
addons_test.go:300: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.015650337s
addons_test.go:305: (dbg) Run:  kubectl --context addons-031105 delete po -l run=registry-test --now
addons_test.go:310: (dbg) Run:  kubectl --context addons-031105 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:310: (dbg) Done: kubectl --context addons-031105 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.969497787s)
addons_test.go:324: (dbg) Run:  out/minikube-linux-amd64 -p addons-031105 ip
addons_test.go:353: (dbg) Run:  out/minikube-linux-amd64 -p addons-031105 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.62s)
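The registry check above amounts to an HTTP probe: a busybox pod wgets registry.kube-system.svc.cluster.local from inside the cluster, and the harness also hits the node-exposed endpoint directly (the `GET http://192.168.39.104:5000` debug line interleaved further down in this log). A hypothetical Go sketch of the same probe from outside the cluster; the IP and port are the ones from this run and will differ elsewhere:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// Node IP and registry port as they appear in this report.
	resp, err := client.Get("http://192.168.39.104:5000/")
	if err != nil {
		fmt.Println("registry not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry responded:", resp.Status)
}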

TestAddons/parallel/Ingress (23.47s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:177: (dbg) Run:  kubectl --context addons-031105 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:197: (dbg) Run:  kubectl --context addons-031105 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context addons-031105 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [df54b152-57a5-4d5b-b981-14fae0d9d154] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [df54b152-57a5-4d5b-b981-14fae0d9d154] Running
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.009295045s
addons_test.go:227: (dbg) Run:  out/minikube-linux-amd64 -p addons-031105 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context addons-031105 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-031105 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.39.104
addons_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p addons-031105 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-linux-amd64 -p addons-031105 addons disable ingress-dns --alsologtostderr -v=1: (1.780001197s)
addons_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p addons-031105 addons disable ingress --alsologtostderr -v=1
addons_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p addons-031105 addons disable ingress --alsologtostderr -v=1: (7.669152199s)
--- PASS: TestAddons/parallel/Ingress (23.47s)
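The ingress check above curls 127.0.0.1 inside the VM with a spoofed Host header so nginx routes by ingress rule rather than by IP. A hypothetical Go equivalent (it must run where the ingress is reachable, e.g. over `minikube ssh`); note that net/http special-cases the Host header, so it is set via req.Host rather than req.Header:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	// Matches the ingress host from testdata/nginx-ingress-v1.yaml in the run above.
	req.Host = "nginx.example.com"

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}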

TestAddons/parallel/MetricsServer (5.63s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: metrics-server stabilized in 23.202224ms
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-5f8fcc9bb7-pkfkv" [f76fd0dd-b015-406a-8091-17ec266ebeaa] Running
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.019186547s
addons_test.go:380: (dbg) Run:  kubectl --context addons-031105 top pods -n kube-system
addons_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p addons-031105 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.63s)

TestAddons/parallel/HelmTiller (13.27s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: tiller-deploy stabilized in 23.02097ms
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-cjcgw" [903257c4-cd74-4c06-bf44-e5f3a13945ec] Running
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.012129718s
addons_test.go:438: (dbg) Run:  kubectl --context addons-031105 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:438: (dbg) Done: kubectl --context addons-031105 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.748498919s)
addons_test.go:455: (dbg) Run:  out/minikube-linux-amd64 -p addons-031105 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.27s)

TestAddons/parallel/CSI (65.56s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 7.762376ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-031105 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc -o jsonpath={.status.phase} -n default
2023/02/24 00:43:57 [DEBUG] GET http://192.168.39.104:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-031105 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c3dd88f6-5902-4c55-a963-911752ba7935] Pending
helpers_test.go:344: "task-pv-pod" [c3dd88f6-5902-4c55-a963-911752ba7935] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [c3dd88f6-5902-4c55-a963-911752ba7935] Running
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.012869583s
addons_test.go:549: (dbg) Run:  kubectl --context addons-031105 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-031105 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-031105 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-031105 delete pod task-pv-pod
addons_test.go:565: (dbg) Run:  kubectl --context addons-031105 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-031105 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-031105 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-031105 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c0985ddf-abe0-4b64-a3d9-de5471b30fe3] Pending
helpers_test.go:344: "task-pv-pod-restore" [c0985ddf-abe0-4b64-a3d9-de5471b30fe3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c0985ddf-abe0-4b64-a3d9-de5471b30fe3] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.013852356s
addons_test.go:591: (dbg) Run:  kubectl --context addons-031105 delete pod task-pv-pod-restore
addons_test.go:591: (dbg) Done: kubectl --context addons-031105 delete pod task-pv-pod-restore: (1.05339433s)
addons_test.go:595: (dbg) Run:  kubectl --context addons-031105 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-031105 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-linux-amd64 -p addons-031105 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-linux-amd64 -p addons-031105 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.546877141s)
addons_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p addons-031105 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (65.56s)
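The long runs of `kubectl get pvc ... -o jsonpath={.status.phase}` above are a poll loop: the helper re-reads the phase until the claim reports Bound or the 6m0s budget expires. A hypothetical Go sketch of that pattern, reusing the kubectl context and PVC name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// 6m0s matches the wait budget logged by the test above.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "addons-031105",
			"get", "pvc", "hpvc", "-n", "default",
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			fmt.Println("pvc hpvc is Bound")
			return
		}
		time.Sleep(2 * time.Second) // back off between polls
	}
	fmt.Println("timed out waiting for pvc hpvc")
}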

TestAddons/parallel/Headlamp (11.24s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-031105 --alsologtostderr -v=1
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-031105 --alsologtostderr -v=1: (1.22828786s)
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5759877c79-b6s9m" [bf601a5d-8131-46bc-a4d5-8ac9c6774902] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5759877c79-b6s9m" [bf601a5d-8131-46bc-a4d5-8ac9c6774902] Running
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.012155597s
--- PASS: TestAddons/parallel/Headlamp (11.24s)

TestAddons/parallel/CloudSpanner (5.42s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-ddf7c59b4-c7h9h" [b88e7180-e968-46a9-9a49-d95c1e84cbf8] Running
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009578235s
addons_test.go:813: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-031105
--- PASS: TestAddons/parallel/CloudSpanner (5.42s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:615: (dbg) Run:  kubectl --context addons-031105 create ns new-namespace
addons_test.go:629: (dbg) Run:  kubectl --context addons-031105 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/StoppedEnableDisable (13.26s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-031105
addons_test.go:147: (dbg) Done: out/minikube-linux-amd64 stop -p addons-031105: (13.095511827s)
addons_test.go:151: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-031105
addons_test.go:155: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-031105
--- PASS: TestAddons/StoppedEnableDisable (13.26s)

TestCertOptions (65.66s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-398057 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
E0224 01:23:40.891492   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-398057 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m4.101205939s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-398057 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-398057 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-398057 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-398057" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-398057
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-398057: (1.087524831s)
--- PASS: TestCertOptions (65.66s)

TestCertExpiration (319.68s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-843515 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-843515 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m42.656443859s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-843515 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-843515 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (35.853243315s)
helpers_test.go:175: Cleaning up "cert-expiration-843515" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-843515
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-843515: (1.165866786s)
--- PASS: TestCertExpiration (319.68s)
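A note on the flag values above: --cert-expiration takes Go duration syntax, so the test first issues certificates that expire in 3m, then restarts with 8760h (365 days). A quick, purely illustrative check of those values:

package main

import (
	"fmt"
	"time"
)

func main() {
	short, _ := time.ParseDuration("3m")   // first start: certs valid for three minutes
	long, _ := time.ParseDuration("8760h") // second start: one year
	fmt.Printf("%v and %v (%.0f days)\n", short, long, long.Hours()/24)
}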

TestDockerFlags (95.05s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-724706 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
E0224 01:21:43.937387   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
docker_test.go:45: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-724706 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m33.544581788s)
docker_test.go:50: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-724706 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-724706 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-724706" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-724706
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-724706: (1.097438411s)
--- PASS: TestDockerFlags (95.05s)
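TestDockerFlags passes --docker-env and --docker-opt at start time and then verifies they reached the Docker systemd unit via `systemctl show`. A hypothetical Go sketch of that verification step, using the binary path, profile name, and env values from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same ssh command the test runs above.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "docker-flags-724706",
		"ssh", "sudo systemctl show docker --property=Environment --no-pager").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	// The values passed via --docker-env should appear in the unit's Environment.
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		if !strings.Contains(string(out), want) {
			fmt.Println("missing docker-env:", want)
		}
	}
	fmt.Println(strings.TrimSpace(string(out)))
}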

TestForceSystemdFlag (62.49s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-415960 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-415960 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m1.192673116s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-415960 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-415960" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-415960
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-415960: (1.042849993s)
--- PASS: TestForceSystemdFlag (62.49s)

TestForceSystemdEnv (82.87s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-457559 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-457559 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m21.42802631s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-457559 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-457559" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-457559
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-457559: (1.158601423s)
--- PASS: TestForceSystemdEnv (82.87s)
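Both force-systemd tests reduce to the same assertion: Docker inside the VM reports systemd as its cgroup driver. A hypothetical Go sketch of the check, with the profile name taken from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same "docker info" probe the tests run over minikube ssh above.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "force-systemd-env-457559",
		"ssh", "docker info --format {{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	driver := strings.TrimSpace(string(out))
	fmt.Println("cgroup driver:", driver)
	if driver != "systemd" {
		fmt.Println("expected systemd cgroup driver")
	}
}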

TestKVMDriverInstallOrUpdate (5.7s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.70s)

TestErrorSpam/setup (53.98s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-704501 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-704501 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-704501 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-704501 --driver=kvm2 : (53.984177728s)
--- PASS: TestErrorSpam/setup (53.98s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704501 --log_dir /tmp/nospam-704501 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704501 --log_dir /tmp/nospam-704501 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704501 --log_dir /tmp/nospam-704501 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.74s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704501 --log_dir /tmp/nospam-704501 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704501 --log_dir /tmp/nospam-704501 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704501 --log_dir /tmp/nospam-704501 status
--- PASS: TestErrorSpam/status (0.74s)

TestErrorSpam/pause (1.22s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704501 --log_dir /tmp/nospam-704501 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704501 --log_dir /tmp/nospam-704501 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704501 --log_dir /tmp/nospam-704501 pause
--- PASS: TestErrorSpam/pause (1.22s)

TestErrorSpam/unpause (1.32s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704501 --log_dir /tmp/nospam-704501 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704501 --log_dir /tmp/nospam-704501 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704501 --log_dir /tmp/nospam-704501 unpause
--- PASS: TestErrorSpam/unpause (1.32s)

TestErrorSpam/stop (4.22s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704501 --log_dir /tmp/nospam-704501 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-704501 --log_dir /tmp/nospam-704501 stop: (4.082662359s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704501 --log_dir /tmp/nospam-704501 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-704501 --log_dir /tmp/nospam-704501 stop
--- PASS: TestErrorSpam/stop (4.22s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1820: local sync path: /home/jenkins/minikube-integration/15909-4074/.minikube/files/etc/test/nested/copy/11131/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (73.47s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2199: (dbg) Run:  out/minikube-linux-amd64 start -p functional-081341 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2199: (dbg) Done: out/minikube-linux-amd64 start -p functional-081341 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m13.47152131s)
--- PASS: TestFunctional/serial/StartWithProxy (73.47s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (70.54s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:653: (dbg) Run:  out/minikube-linux-amd64 start -p functional-081341 --alsologtostderr -v=8
E0224 00:48:40.891995   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
E0224 00:48:40.897894   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
E0224 00:48:40.908115   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
E0224 00:48:40.928433   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
E0224 00:48:40.968709   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
E0224 00:48:41.049040   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
E0224 00:48:41.209454   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
E0224 00:48:41.530134   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
E0224 00:48:42.170638   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
E0224 00:48:43.450786   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
E0224 00:48:46.011394   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
E0224 00:48:51.132199   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
functional_test.go:653: (dbg) Done: out/minikube-linux-amd64 start -p functional-081341 --alsologtostderr -v=8: (1m10.535398104s)
functional_test.go:657: soft start took 1m10.536094233s for "functional-081341" cluster.
--- PASS: TestFunctional/serial/SoftStart (70.54s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:675: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:690: (dbg) Run:  kubectl --context functional-081341 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (5.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1043: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 cache add k8s.gcr.io/pause:3.1
functional_test.go:1043: (dbg) Done: out/minikube-linux-amd64 -p functional-081341 cache add k8s.gcr.io/pause:3.1: (1.720250567s)
functional_test.go:1043: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 cache add k8s.gcr.io/pause:3.3
functional_test.go:1043: (dbg) Done: out/minikube-linux-amd64 -p functional-081341 cache add k8s.gcr.io/pause:3.3: (1.684739356s)
functional_test.go:1043: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 cache add k8s.gcr.io/pause:latest
functional_test.go:1043: (dbg) Done: out/minikube-linux-amd64 -p functional-081341 cache add k8s.gcr.io/pause:latest: (1.701357502s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.11s)

TestFunctional/serial/CacheCmd/cache/add_local (1.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1071: (dbg) Run:  docker build -t minikube-local-cache-test:functional-081341 /tmp/TestFunctionalserialCacheCmdcacheadd_local4178189695/001
functional_test.go:1083: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 cache add minikube-local-cache-test:functional-081341
functional_test.go:1083: (dbg) Done: out/minikube-linux-amd64 -p functional-081341 cache add minikube-local-cache-test:functional-081341: (1.040270298s)
functional_test.go:1088: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 cache delete minikube-local-cache-test:functional-081341
functional_test.go:1077: (dbg) Run:  docker rmi minikube-local-cache-test:functional-081341
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.38s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1096: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1118: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-081341 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (212.918469ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1152: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 cache reload
E0224 00:49:01.372591   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
functional_test.go:1152: (dbg) Done: out/minikube-linux-amd64 -p functional-081341 cache reload: (1.013123608s)
functional_test.go:1157: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)
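The cache_reload sequence above is: remove a cached image inside the VM, confirm `crictl inspecti` now fails, run `minikube cache reload`, and confirm the image is back. A hypothetical Go sketch of the same round trip (profile and image name from this run):

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary used throughout this report.
func run(args ...string) error {
	return exec.Command("out/minikube-linux-amd64", args...).Run()
}

func main() {
	p := "functional-081341"
	img := "k8s.gcr.io/pause:latest"

	// Delete the image inside the VM, then confirm crictl no longer sees it.
	_ = run("-p", p, "ssh", "sudo", "docker", "rmi", img)
	if err := run("-p", p, "ssh", "sudo", "crictl", "inspecti", img); err == nil {
		fmt.Println("image unexpectedly still present")
	}
	// Reload the cache and confirm the image is restored.
	if err := run("-p", p, "cache", "reload"); err != nil {
		fmt.Println("cache reload failed:", err)
		return
	}
	if err := run("-p", p, "ssh", "sudo", "crictl", "inspecti", img); err != nil {
		fmt.Println("image still missing after reload:", err)
		return
	}
	fmt.Println("cache reload restored", img)
}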

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1166: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1166: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:710: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 kubectl -- --context functional-081341 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:735: (dbg) Run:  out/kubectl --context functional-081341 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (47.59s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:751: (dbg) Run:  out/minikube-linux-amd64 start -p functional-081341 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0224 00:49:21.852786   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
functional_test.go:751: (dbg) Done: out/minikube-linux-amd64 start -p functional-081341 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.587646309s)
functional_test.go:755: restart took 47.587754705s for "functional-081341" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (47.59s)
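
The restart above shows how `--extra-config` threads a flag through to a single component (here the apiserver's admission plugins), while `--wait=all` blocks until every component reports ready; that wait dominates the 47s duration. The equivalent standalone invocation:

	# restart the cluster with an extra apiserver flag and wait for all components
	out/minikube-linux-amd64 start -p functional-081341 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all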

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:804: (dbg) Run:  kubectl --context functional-081341 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:819: etcd phase: Running
functional_test.go:829: etcd status: Ready
functional_test.go:819: kube-apiserver phase: Running
functional_test.go:829: kube-apiserver status: Ready
functional_test.go:819: kube-controller-manager phase: Running
functional_test.go:829: kube-controller-manager status: Ready
functional_test.go:819: kube-scheduler phase: Running
functional_test.go:829: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.11s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1230: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 logs
functional_test.go:1230: (dbg) Done: out/minikube-linux-amd64 -p functional-081341 logs: (1.109502113s)
--- PASS: TestFunctional/serial/LogsCmd (1.11s)

TestFunctional/serial/LogsFileCmd (1.18s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1244: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 logs --file /tmp/TestFunctionalserialLogsFileCmd3552150609/001/logs.txt
functional_test.go:1244: (dbg) Done: out/minikube-linux-amd64 -p functional-081341 logs --file /tmp/TestFunctionalserialLogsFileCmd3552150609/001/logs.txt: (1.179487451s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.18s)

TestFunctional/parallel/ConfigCmd (0.33s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 config unset cpus
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 config get cpus
functional_test.go:1193: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-081341 config get cpus: exit status 14 (52.465265ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 config set cpus 2
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 config get cpus
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 config unset cpus
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 config get cpus
functional_test.go:1193: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-081341 config get cpus: exit status 14 (46.717889ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)
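
The pattern being asserted: `config get` on an unset key exits with status 14, while a set key prints its value and exits 0. A minimal reproduction of the same cycle:

	out/minikube-linux-amd64 -p functional-081341 config set cpus 2
	out/minikube-linux-amd64 -p functional-081341 config get cpus     # prints 2, exit 0
	out/minikube-linux-amd64 -p functional-081341 config unset cpus
	out/minikube-linux-amd64 -p functional-081341 config get cpus     # exit 14: key not found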

TestFunctional/parallel/DashboardCmd (15.45s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:899: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-081341 --alsologtostderr -v=1]
functional_test.go:904: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-081341 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 17327: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.45s)

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:968: (dbg) Run:  out/minikube-linux-amd64 start -p functional-081341 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:968: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-081341 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (136.387219ms)
-- stdout --
	* [functional-081341] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15909-4074/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-4074/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0224 00:50:16.347615   17008 out.go:296] Setting OutFile to fd 1 ...
	I0224 00:50:16.347768   17008 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 00:50:16.347779   17008 out.go:309] Setting ErrFile to fd 2...
	I0224 00:50:16.347787   17008 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 00:50:16.347937   17008 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-4074/.minikube/bin
	I0224 00:50:16.348645   17008 out.go:303] Setting JSON to false
	I0224 00:50:16.349807   17008 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":1966,"bootTime":1677197851,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 00:50:16.349887   17008 start.go:135] virtualization: kvm guest
	I0224 00:50:16.352477   17008 out.go:177] * [functional-081341] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0224 00:50:16.353796   17008 out.go:177]   - MINIKUBE_LOCATION=15909
	I0224 00:50:16.353865   17008 notify.go:220] Checking for updates...
	I0224 00:50:16.355255   17008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 00:50:16.357320   17008 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15909-4074/kubeconfig
	I0224 00:50:16.359787   17008 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-4074/.minikube
	I0224 00:50:16.361424   17008 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0224 00:50:16.363058   17008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 00:50:16.364914   17008 config.go:182] Loaded profile config "functional-081341": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 00:50:16.365491   17008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 00:50:16.365549   17008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 00:50:16.379918   17008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34517
	I0224 00:50:16.380331   17008 main.go:141] libmachine: () Calling .GetVersion
	I0224 00:50:16.380909   17008 main.go:141] libmachine: Using API Version  1
	I0224 00:50:16.380935   17008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 00:50:16.381358   17008 main.go:141] libmachine: () Calling .GetMachineName
	I0224 00:50:16.381551   17008 main.go:141] libmachine: (functional-081341) Calling .DriverName
	I0224 00:50:16.381775   17008 driver.go:365] Setting default libvirt URI to qemu:///system
	I0224 00:50:16.382186   17008 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 00:50:16.382255   17008 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 00:50:16.396565   17008 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41323
	I0224 00:50:16.396996   17008 main.go:141] libmachine: () Calling .GetVersion
	I0224 00:50:16.397411   17008 main.go:141] libmachine: Using API Version  1
	I0224 00:50:16.397430   17008 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 00:50:16.397856   17008 main.go:141] libmachine: () Calling .GetMachineName
	I0224 00:50:16.398049   17008 main.go:141] libmachine: (functional-081341) Calling .DriverName
	I0224 00:50:16.430030   17008 out.go:177] * Using the kvm2 driver based on existing profile
	I0224 00:50:16.431359   17008 start.go:296] selected driver: kvm2
	I0224 00:50:16.431372   17008 start.go:857] validating driver "kvm2" against &{Name:functional-081341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-081341 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.154 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 00:50:16.431457   17008 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 00:50:16.433576   17008 out.go:177] 
	W0224 00:50:16.434917   17008 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0224 00:50:16.436310   17008 out.go:177] 
** /stderr **
functional_test.go:985: (dbg) Run:  out/minikube-linux-amd64 start -p functional-081341 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)
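
The non-zero exit is the expected half of this test: with --dry-run, minikube still validates the requested configuration, so an undersized memory request fails fast with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before any VM work happens, while a valid dry run exits 0. Both invocations from the run:

	# validation only; exits 23 because 250MB is below the 1800MB usable minimum
	out/minikube-linux-amd64 start -p functional-081341 --dry-run --memory 250MB --alsologtostderr --driver=kvm2
	# same dry run with the existing (valid) profile settings; exits 0
	out/minikube-linux-amd64 start -p functional-081341 --dry-run --alsologtostderr -v=1 --driver=kvm2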

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 start -p functional-081341 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1014: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-081341 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (135.054681ms)
-- stdout --
	* [functional-081341] minikube v1.29.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15909-4074/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-4074/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0224 00:50:16.612009   17062 out.go:296] Setting OutFile to fd 1 ...
	I0224 00:50:16.612128   17062 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 00:50:16.612136   17062 out.go:309] Setting ErrFile to fd 2...
	I0224 00:50:16.612140   17062 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 00:50:16.612275   17062 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-4074/.minikube/bin
	I0224 00:50:16.612754   17062 out.go:303] Setting JSON to false
	I0224 00:50:16.613643   17062 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":1966,"bootTime":1677197851,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 00:50:16.613708   17062 start.go:135] virtualization: kvm guest
	I0224 00:50:16.615991   17062 out.go:177] * [functional-081341] minikube v1.29.0 sur Ubuntu 20.04 (kvm/amd64)
	I0224 00:50:16.617703   17062 notify.go:220] Checking for updates...
	I0224 00:50:16.617711   17062 out.go:177]   - MINIKUBE_LOCATION=15909
	I0224 00:50:16.619168   17062 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 00:50:16.620902   17062 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15909-4074/kubeconfig
	I0224 00:50:16.622269   17062 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-4074/.minikube
	I0224 00:50:16.623885   17062 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0224 00:50:16.625221   17062 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 00:50:16.626740   17062 config.go:182] Loaded profile config "functional-081341": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 00:50:16.627066   17062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 00:50:16.627103   17062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 00:50:16.641508   17062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34263
	I0224 00:50:16.641848   17062 main.go:141] libmachine: () Calling .GetVersion
	I0224 00:50:16.642476   17062 main.go:141] libmachine: Using API Version  1
	I0224 00:50:16.642497   17062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 00:50:16.643188   17062 main.go:141] libmachine: () Calling .GetMachineName
	I0224 00:50:16.644939   17062 main.go:141] libmachine: (functional-081341) Calling .DriverName
	I0224 00:50:16.645153   17062 driver.go:365] Setting default libvirt URI to qemu:///system
	I0224 00:50:16.645544   17062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 00:50:16.645586   17062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 00:50:16.659690   17062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35605
	I0224 00:50:16.660069   17062 main.go:141] libmachine: () Calling .GetVersion
	I0224 00:50:16.660462   17062 main.go:141] libmachine: Using API Version  1
	I0224 00:50:16.660484   17062 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 00:50:16.660774   17062 main.go:141] libmachine: () Calling .GetMachineName
	I0224 00:50:16.660958   17062 main.go:141] libmachine: (functional-081341) Calling .DriverName
	I0224 00:50:16.693355   17062 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0224 00:50:16.694945   17062 start.go:296] selected driver: kvm2
	I0224 00:50:16.694964   17062 start.go:857] validating driver "kvm2" against &{Name:functional-081341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-081341 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.154 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 00:50:16.695098   17062 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 00:50:16.697577   17062 out.go:177] 
	W0224 00:50:16.699200   17062 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0224 00:50:16.700651   17062 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)
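
The French output ("Utilisation du pilote kvm2...") is the same dry-run failure as in DryRun, rendered through minikube's translations; the test presumably selects the language via the process locale. A hedged sketch (the exact locale value is an assumption, not shown in this log):

	# assumption: a French locale in the environment switches minikube's client strings
	LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-081341 --dry-run --memory 250MB --driver=kvm2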

TestFunctional/parallel/StatusCmd (1.05s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:848: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 status
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:866: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.05s)
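
`status` is exercised in its three output modes: the default table, a Go template via -f (note that "kublet" above is just the test's own label text; the template field is .Kubelet), and machine-readable JSON via -o. A trimmed sketch:

	out/minikube-linux-amd64 -p functional-081341 status
	out/minikube-linux-amd64 -p functional-081341 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
	out/minikube-linux-amd64 -p functional-081341 status -o json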

TestFunctional/parallel/ServiceCmdConnect (9.57s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1597: (dbg) Run:  kubectl --context functional-081341 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1603: (dbg) Run:  kubectl --context functional-081341 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1608: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-5cf7cc858f-l5d92" [f1f1dc53-6b99-4848-9f40-a9a50295034b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-5cf7cc858f-l5d92" [f1f1dc53-6b99-4848-9f40-a9a50295034b] Running
functional_test.go:1608: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.013807188s
functional_test.go:1617: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 service hello-node-connect --url
functional_test.go:1623: found endpoint for hello-node-connect: http://192.168.39.154:31867
functional_test.go:1643: http://192.168.39.154:31867: success! body:
Hostname: hello-node-connect-5cf7cc858f-l5d92
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.154:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.39.154:31867
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.57s)
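
This is the standard NodePort smoke test: create a deployment, expose it on a NodePort, then let minikube assemble the reachable URL from the node IP and the allocated port (http://192.168.39.154:31867 above). Condensed:

	kubectl --context functional-081341 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
	kubectl --context functional-081341 expose deployment hello-node-connect --type=NodePort --port=8080
	# prints the resolved http://<node-ip>:<node-port> endpoint
	out/minikube-linux-amd64 -p functional-081341 service hello-node-connect --url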

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1658: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 addons list
functional_test.go:1670: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (56.01s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [c77f51d9-d4ee-46c7-b393-dbef65d3d1ec] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.018312625s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-081341 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-081341 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-081341 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-081341 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-081341 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [14af9b7d-90e9-4bca-926f-6f39b70ef4ee] Pending
helpers_test.go:344: "sp-pod" [14af9b7d-90e9-4bca-926f-6f39b70ef4ee] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [14af9b7d-90e9-4bca-926f-6f39b70ef4ee] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 35.085448618s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-081341 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-081341 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-081341 delete -f testdata/storage-provisioner/pod.yaml: (1.223689386s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-081341 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b5f0804d-8afe-4fcb-82bc-62813c2ea7f3] Pending
helpers_test.go:344: "sp-pod" [b5f0804d-8afe-4fcb-82bc-62813c2ea7f3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b5f0804d-8afe-4fcb-82bc-62813c2ea7f3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.010616499s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-081341 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (56.01s)
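
The pod delete/re-create in the middle is the point of the test: a file written under the claim's mount by the first pod must still be visible to the second, proving the volume outlives any one pod. The shape of the check, using the repo's testdata manifests:

	kubectl --context functional-081341 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-081341 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-081341 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-081341 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-081341 apply -f testdata/storage-provisioner/pod.yaml
	# the file written by the first pod must survive into the second
	kubectl --context functional-081341 exec sp-pod -- ls /tmp/mount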

TestFunctional/parallel/SSHCmd (0.61s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1693: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 ssh "echo hello"
functional_test.go:1710: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.61s)

TestFunctional/parallel/CpCmd (0.86s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 ssh -n functional-081341 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 cp functional-081341:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd258740314/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 ssh -n functional-081341 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.86s)
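
`cp` is checked in both directions: host-to-node with a plain destination path, and node-to-host using the <profile>:<path> form, with `ssh sudo cat` verifying the contents each way. Condensed (the host destination here is shortened from the test's temp directory):

	out/minikube-linux-amd64 -p functional-081341 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-amd64 -p functional-081341 cp functional-081341:/home/docker/cp-test.txt /tmp/cp-test.txt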

TestFunctional/parallel/MySQL (36.74s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1758: (dbg) Run:  kubectl --context functional-081341 replace --force -f testdata/mysql.yaml
functional_test.go:1764: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-888f84dd9-7sll6" [723e75f2-f739-4008-975f-a9b3bb95722c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-888f84dd9-7sll6" [723e75f2-f739-4008-975f-a9b3bb95722c] Running
functional_test.go:1764: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 31.007115499s
functional_test.go:1772: (dbg) Run:  kubectl --context functional-081341 exec mysql-888f84dd9-7sll6 -- mysql -ppassword -e "show databases;"
functional_test.go:1772: (dbg) Non-zero exit: kubectl --context functional-081341 exec mysql-888f84dd9-7sll6 -- mysql -ppassword -e "show databases;": exit status 1 (525.688566ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1772: (dbg) Run:  kubectl --context functional-081341 exec mysql-888f84dd9-7sll6 -- mysql -ppassword -e "show databases;"
functional_test.go:1772: (dbg) Non-zero exit: kubectl --context functional-081341 exec mysql-888f84dd9-7sll6 -- mysql -ppassword -e "show databases;": exit status 1 (429.702746ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1772: (dbg) Run:  kubectl --context functional-081341 exec mysql-888f84dd9-7sll6 -- mysql -ppassword -e "show databases;"
functional_test.go:1772: (dbg) Non-zero exit: kubectl --context functional-081341 exec mysql-888f84dd9-7sll6 -- mysql -ppassword -e "show databases;": exit status 1 (174.651677ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1772: (dbg) Run:  kubectl --context functional-081341 exec mysql-888f84dd9-7sll6 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (36.74s)
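
The three non-zero exits are expected startup noise, not flakes: mysqld inside the pod refuses connections (and at one point rejects root) while the entrypoint initializes, so the test simply retries the query until it succeeds. A reproduction would do the same, e.g. (the deploy/mysql shorthand is a convenience, not what the test runs):

	# retry until mysqld inside the pod is actually ready
	until kubectl --context functional-081341 exec deploy/mysql -- mysql -ppassword -e "show databases;"; do sleep 2; done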

TestFunctional/parallel/FileSync (0.2s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1894: Checking for existence of /etc/test/nested/copy/11131/hosts within VM
functional_test.go:1896: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 ssh "sudo cat /etc/test/nested/copy/11131/hosts"
functional_test.go:1901: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

TestFunctional/parallel/CertSync (1.37s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1937: Checking for existence of /etc/ssl/certs/11131.pem within VM
functional_test.go:1938: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 ssh "sudo cat /etc/ssl/certs/11131.pem"
functional_test.go:1937: Checking for existence of /usr/share/ca-certificates/11131.pem within VM
functional_test.go:1938: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 ssh "sudo cat /usr/share/ca-certificates/11131.pem"
functional_test.go:1937: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1938: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1964: Checking for existence of /etc/ssl/certs/111312.pem within VM
functional_test.go:1965: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 ssh "sudo cat /etc/ssl/certs/111312.pem"
functional_test.go:1964: Checking for existence of /usr/share/ca-certificates/111312.pem within VM
functional_test.go:1965: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 ssh "sudo cat /usr/share/ca-certificates/111312.pem"
functional_test.go:1964: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1965: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.37s)
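
Each injected certificate is expected in three places inside the VM: /etc/ssl/certs/<name>.pem, /usr/share/ca-certificates/<name>.pem, and an OpenSSL subject-hash link such as 51391683.0. Given a local copy of the cert, the hash name can presumably be recomputed on the host (assumption: openssl is available there):

	# prints the subject hash, i.e. the basename of the .0 link the test looks up
	openssl x509 -hash -noout -in 11131.pem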

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-081341 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1992: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 ssh "sudo systemctl is-active crio"
functional_test.go:1992: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-081341 ssh "sudo systemctl is-active crio": exit status 1 (206.369817ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)
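
The exit status here wraps systemd's own convention: `systemctl is-active` prints "inactive" and exits 3 when the unit is not running, which is exactly what should happen for crio on a docker-runtime cluster. The check:

	# expected: prints "inactive"; the remote command exits 3, surfaced via ssh as a non-zero exit
	out/minikube-linux-amd64 -p functional-081341 ssh "sudo systemctl is-active crio"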

TestFunctional/parallel/License (0.13s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2253: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.13s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2221: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.74s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2235: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.74s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 image ls --format short
functional_test.go:263: (dbg) Stdout: out/minikube-linux-amd64 -p functional-081341 image ls --format short:
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.6
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-081341
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-081341
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.39s)
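
This and the next three tests walk `image ls` through all four of its output formats; the listings that follow are the same image set rendered differently:

	out/minikube-linux-amd64 -p functional-081341 image ls --format short
	out/minikube-linux-amd64 -p functional-081341 image ls --format table
	out/minikube-linux-amd64 -p functional-081341 image ls --format json
	out/minikube-linux-amd64 -p functional-081341 image ls --format yaml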

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 image ls --format table
functional_test.go:263: (dbg) Stdout: out/minikube-linux-amd64 -p functional-081341 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-081341 | a80c51250401c | 30B    |
| registry.k8s.io/kube-apiserver              | v1.26.1           | deb04688c4a35 | 134MB  |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-scheduler              | v1.26.1           | 655493523f607 | 56.3MB |
| registry.k8s.io/kube-proxy                  | v1.26.1           | 46a6bb3c77ce0 | 65.6MB |
| gcr.io/google-containers/addon-resizer      | functional-081341 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| registry.k8s.io/pause                       | 3.6               | 6270bb605e12e | 683kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/library/nginx                     | latest            | 3f8a00f137a0d | 142MB  |
| docker.io/library/mysql                     | 5.7               | be16cf2d832a9 | 455MB  |
| registry.k8s.io/kube-controller-manager     | v1.26.1           | e9c08e11b07f6 | 124MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|-------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 image ls --format json
functional_test.go:263: (dbg) Stdout: out/minikube-linux-amd64 -p functional-081341 image ls --format json:
[{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"a80c51250401c2871fa0b36c89daf72d25ae850a099eb3f00d8823dfabc355bd","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-081341"],"size":"30"},{"id":"46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.26.1"],"size":"65599999"},{"id":"fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.6"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-081341"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.1"],"size":"56300000"},{"id":"e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.1"],"size":"124000000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"3f8a00f137a0d2c8a2163a09901e28e2471999fde4efc2f9570b91f1c30acf94","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"be16cf2d832a9a54ce42144e25f5ae7cc66bccf0e003837e7b5eb1a455dc742b","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"455000000"},{"id":"deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.26.1"],"size":"134000000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 image ls --format yaml
functional_test.go:263: (dbg) Stdout: out/minikube-linux-amd64 -p functional-081341 image ls --format yaml:
- id: 3f8a00f137a0d2c8a2163a09901e28e2471999fde4efc2f9570b91f1c30acf94
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.1
size: "124000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-081341
size: "32900000"
- id: a80c51250401c2871fa0b36c89daf72d25ae850a099eb3f00d8823dfabc355bd
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-081341
size: "30"
- id: deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.1
size: "134000000"
- id: 655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.1
size: "56300000"
- id: 46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.26.1
size: "65599999"
- id: fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "299000000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: be16cf2d832a9a54ce42144e25f5ae7cc66bccf0e003837e7b5eb1a455dc742b
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "455000000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.6
size: "683000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 ssh pgrep buildkitd
functional_test.go:305: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-081341 ssh pgrep buildkitd: exit status 1 (221.807291ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 image build -t localhost/my-image:functional-081341 testdata/build
functional_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p functional-081341 image build -t localhost/my-image:functional-081341 testdata/build: (3.792390011s)
functional_test.go:317: (dbg) Stdout: out/minikube-linux-amd64 -p functional-081341 image build -t localhost/my-image:functional-081341 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in f56854ad8cde
Removing intermediate container f56854ad8cde
---> 5aee7007e1e1
Step 3/3 : ADD content.txt /
---> c3f5a9701cd6
Successfully built c3f5a9701cd6
Successfully tagged localhost/my-image:functional-081341
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 image ls
2023/02/24 00:50:42 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.28s)
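The (dbg) Run / Done pairs above come from the harness shelling out to the freshly built binary and timing each call. A minimal Go sketch of that pattern, assuming nothing about the real test helpers beyond the command line visible in the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Command line copied from the log above; the timing wrapper is ours.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-081341",
		"image", "build", "-t", "localhost/my-image:functional-081341", "testdata/build")
	start := time.Now()
	out, err := cmd.CombinedOutput()
	fmt.Printf("(dbg) took %s, err=%v\n%s", time.Since(start), err, out)
}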

TestFunctional/parallel/ImageCommands/Setup (1.2s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:339: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:339: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.127000401s)
functional_test.go:344: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-081341
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.20s)

TestFunctional/parallel/DockerEnv/bash (0.92s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:493: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-081341 docker-env) && out/minikube-linux-amd64 status -p functional-081341"
functional_test.go:516: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-081341 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.92s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2084: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2084: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2084: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:352: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 image load --daemon gcr.io/google-containers/addon-resizer:functional-081341
functional_test.go:352: (dbg) Done: out/minikube-linux-amd64 -p functional-081341 image load --daemon gcr.io/google-containers/addon-resizer:functional-081341: (4.657535926s)
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.88s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:362: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 image load --daemon gcr.io/google-containers/addon-resizer:functional-081341
functional_test.go:362: (dbg) Done: out/minikube-linux-amd64 -p functional-081341 image load --daemon gcr.io/google-containers/addon-resizer:functional-081341: (2.247834026s)
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.45s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:232: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
E0224 00:50:02.813757   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
functional_test.go:232: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.096334294s)
functional_test.go:237: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-081341
functional_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 image load --daemon gcr.io/google-containers/addon-resizer:functional-081341
functional_test.go:242: (dbg) Done: out/minikube-linux-amd64 -p functional-081341 image load --daemon gcr.io/google-containers/addon-resizer:functional-081341: (4.377483698s)
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.77s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:377: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 image save gcr.io/google-containers/addon-resizer:functional-081341 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar
functional_test.go:377: (dbg) Done: out/minikube-linux-amd64 -p functional-081341 image save gcr.io/google-containers/addon-resizer:functional-081341 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar: (2.019988271s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.02s)

TestFunctional/parallel/ServiceCmd/ServiceJSONOutput (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/ServiceJSONOutput
functional_test.go:1547: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 service list -o json
functional_test.go:1552: Took "344.994727ms" to run "out/minikube-linux-amd64 -p functional-081341 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/ServiceJSONOutput (0.35s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:389: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 image rm gcr.io/google-containers/addon-resizer:functional-081341
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:406: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar
functional_test.go:406: (dbg) Done: out/minikube-linux-amd64 -p functional-081341 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar: (2.281567288s)
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.57s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1272: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1307: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1312: Took "271.909351ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1321: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1326: Took "47.70688ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1358: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1363: Took "267.461423ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1371: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1376: Took "49.143244ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

TestFunctional/parallel/MountCmd/any-port (20.99s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:69: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-081341 /tmp/TestFunctionalparallelMountCmdany-port2663648389/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:103: wrote "test-1677199812581190982" to /tmp/TestFunctionalparallelMountCmdany-port2663648389/001/created-by-test
functional_test_mount_test.go:103: wrote "test-1677199812581190982" to /tmp/TestFunctionalparallelMountCmdany-port2663648389/001/created-by-test-removed-by-pod
functional_test_mount_test.go:103: wrote "test-1677199812581190982" to /tmp/TestFunctionalparallelMountCmdany-port2663648389/001/test-1677199812581190982
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:111: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-081341 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (268.410653ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 ssh -- ls -la /mount-9p
functional_test_mount_test.go:129: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 24 00:50 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 24 00:50 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 24 00:50 test-1677199812581190982
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 ssh cat /mount-9p/test-1677199812581190982
functional_test_mount_test.go:144: (dbg) Run:  kubectl --context functional-081341 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [180d5e94-390a-4943-b947-ae6e5f1f2cb8] Pending
helpers_test.go:344: "busybox-mount" [180d5e94-390a-4943-b947-ae6e5f1f2cb8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [180d5e94-390a-4943-b947-ae6e5f1f2cb8] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [180d5e94-390a-4943-b947-ae6e5f1f2cb8] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 18.009964811s
functional_test_mount_test.go:165: (dbg) Run:  kubectl --context functional-081341 logs busybox-mount
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:86: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-081341 /tmp/TestFunctionalparallelMountCmdany-port2663648389/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (20.99s)
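Worth noting in the log above: the first findmnt probe exits 1 because the 9p mount has not settled yet, and the harness simply re-runs it. A sketch of that probe-and-retry idea — the ssh command line is taken from the log, while the retry count and delay are illustrative assumptions, not the harness's actual policy:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Probe command copied from the log; loop bounds are ours.
	for attempt := 1; attempt <= 5; attempt++ {
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-081341",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("9p mount is visible in the guest")
			return
		}
		fmt.Printf("attempt %d: not mounted yet (%v)\n", attempt, err)
		time.Sleep(time.Second)
	}
}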

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:416: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-081341
functional_test.go:421: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 image save --daemon gcr.io/google-containers/addon-resizer:functional-081341
functional_test.go:421: (dbg) Done: out/minikube-linux-amd64 -p functional-081341 image save --daemon gcr.io/google-containers/addon-resizer:functional-081341: (3.289920453s)
functional_test.go:426: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-081341
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.42s)

TestFunctional/parallel/MountCmd/specific-port (1.94s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:209: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-081341 /tmp/TestFunctionalparallelMountCmdspecific-port1354749760/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-081341 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (211.698754ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:253: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 ssh -- ls -la /mount-9p
functional_test_mount_test.go:257: guest mount directory contents
total 0
functional_test_mount_test.go:259: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-081341 /tmp/TestFunctionalparallelMountCmdspecific-port1354749760/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:260: reading mount text
functional_test_mount_test.go:274: done reading mount text
functional_test_mount_test.go:226: (dbg) Run:  out/minikube-linux-amd64 -p functional-081341 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:226: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-081341 ssh "sudo umount -f /mount-9p": exit status 1 (210.4344ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:228: "out/minikube-linux-amd64 -p functional-081341 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:230: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-081341 /tmp/TestFunctionalparallelMountCmdspecific-port1354749760/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.94s)

TestFunctional/delete_addon-resizer_images (0.15s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:187: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:187: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-081341
--- PASS: TestFunctional/delete_addon-resizer_images (0.15s)

TestFunctional/delete_my-image_image (0.06s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-081341
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-081341
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestGvisorAddon (409.11s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-694714 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-694714 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m21.853454006s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-694714 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-694714 cache add gcr.io/k8s-minikube/gvisor-addon:2: (23.947011825s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-694714 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-694714 addons enable gvisor: (3.590939719s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [5137b71e-b02a-40c1-b222-044c9aa10ec7] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.016398341s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-694714 replace --force -f testdata/nginx-untrusted.yaml
gvisor_addon_test.go:78: (dbg) Run:  kubectl --context gvisor-694714 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:83: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,untrusted=true" in namespace "default" ...
helpers_test.go:344: "nginx-untrusted" [ce6e1b97-ad04-4452-9fbe-56da7ec3c324] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-untrusted" [ce6e1b97-ad04-4452-9fbe-56da7ec3c324] Running
E0224 01:22:52.900550   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/skaffold-406990/client.crt: no such file or directory
gvisor_addon_test.go:83: (dbg) TestGvisorAddon: run=nginx,untrusted=true healthy within 51.010499817s
gvisor_addon_test.go:86: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [635621b0-2694-43df-ad92-3e3596a87f5b] Running
gvisor_addon_test.go:86: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.008651793s
gvisor_addon_test.go:91: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-694714
E0224 01:23:13.380938   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/skaffold-406990/client.crt: no such file or directory
gvisor_addon_test.go:91: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-694714: (1m32.520270052s)
gvisor_addon_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-694714 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E0224 01:24:52.742370   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/functional-081341/client.crt: no such file or directory
gvisor_addon_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-694714 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (2m9.227029869s)
gvisor_addon_test.go:100: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [5137b71e-b02a-40c1-b222-044c9aa10ec7] Running
gvisor_addon_test.go:100: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.018035263s
gvisor_addon_test.go:103: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,untrusted=true" in namespace "default" ...
helpers_test.go:344: "nginx-untrusted" [ce6e1b97-ad04-4452-9fbe-56da7ec3c324] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:103: (dbg) TestGvisorAddon: run=nginx,untrusted=true healthy within 5.007777314s
gvisor_addon_test.go:106: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [635621b0-2694-43df-ad92-3e3596a87f5b] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:106: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.008245708s
helpers_test.go:175: Cleaning up "gvisor-694714" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-694714
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-694714: (1.459858135s)
--- PASS: TestGvisorAddon (409.11s)

TestImageBuild/serial/NormalBuild (2.29s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:73: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-959373
image_test.go:73: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-959373: (2.287429335s)
--- PASS: TestImageBuild/serial/NormalBuild (2.29s)

TestImageBuild/serial/BuildWithBuildArg (1.44s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:94: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-959373
image_test.go:94: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-959373: (1.443539932s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.44s)

TestImageBuild/serial/BuildWithDockerIgnore (0.46s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:128: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-959373
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.46s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.34s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:83: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-959373
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.34s)

TestIngressAddonLegacy/StartLegacyK8sCluster (107.22s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-443815 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-443815 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : (1m47.215948976s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (107.22s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.21s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-443815 addons enable ingress --alsologtostderr -v=5
E0224 00:53:40.891799   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-443815 addons enable ingress --alsologtostderr -v=5: (14.210807426s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.21s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.38s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-443815 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.38s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (36.49s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:177: (dbg) Run:  kubectl --context ingress-addon-legacy-443815 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E0224 00:54:08.576161   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
addons_test.go:177: (dbg) Done: kubectl --context ingress-addon-legacy-443815 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.33445726s)
addons_test.go:197: (dbg) Run:  kubectl --context ingress-addon-legacy-443815 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context ingress-addon-legacy-443815 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [72d396e7-e5e3-4b07-b13a-81dbf7e1edd9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [72d396e7-e5e3-4b07-b13a-81dbf7e1edd9] Running
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.007030024s
addons_test.go:227: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-443815 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context ingress-addon-legacy-443815 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-443815 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.39.46
addons_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-443815 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-443815 addons disable ingress-dns --alsologtostderr -v=1: (2.694962298s)
addons_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-443815 addons disable ingress --alsologtostderr -v=1
addons_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-443815 addons disable ingress --alsologtostderr -v=1: (7.335657718s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (36.49s)

TestJSONOutput/start/Command (108.96s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-992692 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E0224 00:54:52.742798   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/functional-081341/client.crt: no such file or directory
E0224 00:54:52.748052   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/functional-081341/client.crt: no such file or directory
E0224 00:54:52.758316   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/functional-081341/client.crt: no such file or directory
E0224 00:54:52.778600   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/functional-081341/client.crt: no such file or directory
E0224 00:54:52.818888   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/functional-081341/client.crt: no such file or directory
E0224 00:54:52.899277   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/functional-081341/client.crt: no such file or directory
E0224 00:54:53.059694   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/functional-081341/client.crt: no such file or directory
E0224 00:54:53.380261   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/functional-081341/client.crt: no such file or directory
E0224 00:54:54.020408   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/functional-081341/client.crt: no such file or directory
E0224 00:54:55.300915   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/functional-081341/client.crt: no such file or directory
E0224 00:54:57.861608   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/functional-081341/client.crt: no such file or directory
E0224 00:55:02.982379   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/functional-081341/client.crt: no such file or directory
E0224 00:55:13.222682   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/functional-081341/client.crt: no such file or directory
E0224 00:55:33.703494   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/functional-081341/client.crt: no such file or directory
E0224 00:56:14.663918   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/functional-081341/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-992692 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m48.961273484s)
--- PASS: TestJSONOutput/start/Command (108.96s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.59s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-992692 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.59s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.52s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-992692 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.52s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (13.11s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-992692 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-992692 --output=json --user=testUser: (13.106042689s)
--- PASS: TestJSONOutput/stop/Command (13.11s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.42s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-346811 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-346811 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (68.103699ms)

-- stdout --
	{"specversion":"1.0","id":"ba443b35-42a8-4074-8c28-8069266ef138","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-346811] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"84983f3d-205a-482d-aaed-01cf973e77fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15909"}}
	{"specversion":"1.0","id":"4cbb40d5-95a3-431b-9772-f880517df392","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"73d237a1-f8c4-4f73-b7e1-944ff93a84c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15909-4074/kubeconfig"}}
	{"specversion":"1.0","id":"198d8636-c1ea-4645-b598-20a851ec0507","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-4074/.minikube"}}
	{"specversion":"1.0","id":"61502e70-0f08-44ba-aa86-f615632b5d3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ad9ea89d-6894-49a0-9e6d-e1d6d115dc82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0f34eb2f-c6c7-4008-94e1-91e68cdf32aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-346811" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-346811
--- PASS: TestErrorJSONOutput (0.42s)
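Each line in the stdout block above is a CloudEvents-style envelope (specversion 1.0) whose data payload is a flat string map. A minimal Go sketch that decodes one of the events shown — the field names come from the log, the struct is ours:

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the envelope fields visible in the stdout above.
type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// One event copied verbatim from the stdout above.
	raw := `{"specversion":"1.0","id":"84983f3d-205a-482d-aaed-01cf973e77fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15909"}}`
	var e event
	if err := json.Unmarshal([]byte(raw), &e); err != nil {
		panic(err)
	}
	fmt.Println(e.Type, e.Data["message"])
}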

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (121.49s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-942046 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-942046 --driver=kvm2 : (57.107543553s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-945237 --driver=kvm2 
E0224 00:57:36.585789   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/functional-081341/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-945237 --driver=kvm2 : (1m1.311575746s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-942046
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-945237
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-945237" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-945237
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-945237: (1.010242207s)
helpers_test.go:175: Cleaning up "first-942046" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-942046
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-942046: (1.015309807s)
--- PASS: TestMinikubeProfile (121.49s)

TestMountStart/serial/StartWithMountFirst (31.03s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-355422 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0224 00:58:40.892358   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
E0224 00:58:53.748863   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/ingress-addon-legacy-443815/client.crt: no such file or directory
E0224 00:58:53.754162   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/ingress-addon-legacy-443815/client.crt: no such file or directory
E0224 00:58:53.764410   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/ingress-addon-legacy-443815/client.crt: no such file or directory
E0224 00:58:53.784696   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/ingress-addon-legacy-443815/client.crt: no such file or directory
E0224 00:58:53.824978   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/ingress-addon-legacy-443815/client.crt: no such file or directory
E0224 00:58:53.905313   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/ingress-addon-legacy-443815/client.crt: no such file or directory
E0224 00:58:54.065858   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/ingress-addon-legacy-443815/client.crt: no such file or directory
E0224 00:58:54.483269   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/ingress-addon-legacy-443815/client.crt: no such file or directory
E0224 00:58:55.124206   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/ingress-addon-legacy-443815/client.crt: no such file or directory
E0224 00:58:56.404714   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/ingress-addon-legacy-443815/client.crt: no such file or directory
E0224 00:58:58.965636   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/ingress-addon-legacy-443815/client.crt: no such file or directory
E0224 00:59:04.085890   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/ingress-addon-legacy-443815/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-355422 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (30.028684163s)
--- PASS: TestMountStart/serial/StartWithMountFirst (31.03s)

TestMountStart/serial/VerifyMountFirst (0.37s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-355422 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-355422 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
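
The two commands above are the whole verification: list the host directory through the guest-side mount point, then confirm the mount shows up as a 9p filesystem (the protocol minikube's mount uses with the KVM driver). A minimal manual re-check, assuming the same profile name as the test:

    out/minikube-linux-amd64 -p mount-start-1-355422 ssh -- ls /minikube-host
    out/minikube-linux-amd64 -p mount-start-1-355422 ssh -- mount | grep 9p

An empty grep result would mean the 9p mount is missing, and the test would fail.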

TestMountStart/serial/StartWithMountSecond (29.03s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-372090 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0224 00:59:14.326644   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/ingress-addon-legacy-443815/client.crt: no such file or directory
E0224 00:59:34.806949   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/ingress-addon-legacy-443815/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-372090 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (28.033642842s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.03s)

TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-372090 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-372090 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (0.91s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-355422 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.91s)

TestMountStart/serial/VerifyMountPostDelete (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-372090 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-372090 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

TestMountStart/serial/Stop (2.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-372090
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-372090: (2.220628011s)
--- PASS: TestMountStart/serial/Stop (2.22s)

TestMountStart/serial/RestartStopped (25.01s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-372090
E0224 00:59:52.742102   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/functional-081341/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-372090: (24.010348807s)
--- PASS: TestMountStart/serial/RestartStopped (25.01s)

TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-372090 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-372090 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

TestMultiNode/serial/FreshStart2Nodes (131.78s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-858631 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0224 01:00:15.767303   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/ingress-addon-legacy-443815/client.crt: no such file or directory
E0224 01:00:20.426353   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/functional-081341/client.crt: no such file or directory
E0224 01:01:37.688178   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/ingress-addon-legacy-443815/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-858631 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m11.366539217s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (131.78s)

TestMultiNode/serial/DeployApp2Nodes (4.85s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-858631 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-858631 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-858631 -- rollout status deployment/busybox: (3.169717034s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-858631 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-858631 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-858631 -- exec busybox-6b86dd6d48-bkl2m -- nslookup kubernetes.io
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-858631 -- exec busybox-6b86dd6d48-pmnbg -- nslookup kubernetes.io
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-858631 -- exec busybox-6b86dd6d48-bkl2m -- nslookup kubernetes.default
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-858631 -- exec busybox-6b86dd6d48-pmnbg -- nslookup kubernetes.default
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-858631 -- exec busybox-6b86dd6d48-bkl2m -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-858631 -- exec busybox-6b86dd6d48-pmnbg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.85s)

TestMultiNode/serial/PingHostFrom2Pods (0.86s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:539: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-858631 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:547: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-858631 -- exec busybox-6b86dd6d48-bkl2m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:558: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-858631 -- exec busybox-6b86dd6d48-bkl2m -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:547: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-858631 -- exec busybox-6b86dd6d48-pmnbg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:558: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-858631 -- exec busybox-6b86dd6d48-pmnbg -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.86s)

TestMultiNode/serial/AddNode (55.44s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-858631 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-858631 -v 3 --alsologtostderr: (54.84085972s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (55.44s)

TestMultiNode/serial/ProfileList (0.26s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.26s)

TestMultiNode/serial/CopyFile (7.24s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 cp testdata/cp-test.txt multinode-858631:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 ssh -n multinode-858631 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 cp multinode-858631:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3133866316/001/cp-test_multinode-858631.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 ssh -n multinode-858631 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 cp multinode-858631:/home/docker/cp-test.txt multinode-858631-m02:/home/docker/cp-test_multinode-858631_multinode-858631-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 ssh -n multinode-858631 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 ssh -n multinode-858631-m02 "sudo cat /home/docker/cp-test_multinode-858631_multinode-858631-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 cp multinode-858631:/home/docker/cp-test.txt multinode-858631-m03:/home/docker/cp-test_multinode-858631_multinode-858631-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 ssh -n multinode-858631 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 ssh -n multinode-858631-m03 "sudo cat /home/docker/cp-test_multinode-858631_multinode-858631-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 cp testdata/cp-test.txt multinode-858631-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 ssh -n multinode-858631-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 cp multinode-858631-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3133866316/001/cp-test_multinode-858631-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 ssh -n multinode-858631-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 cp multinode-858631-m02:/home/docker/cp-test.txt multinode-858631:/home/docker/cp-test_multinode-858631-m02_multinode-858631.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 ssh -n multinode-858631-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 ssh -n multinode-858631 "sudo cat /home/docker/cp-test_multinode-858631-m02_multinode-858631.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 cp multinode-858631-m02:/home/docker/cp-test.txt multinode-858631-m03:/home/docker/cp-test_multinode-858631-m02_multinode-858631-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 ssh -n multinode-858631-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 ssh -n multinode-858631-m03 "sudo cat /home/docker/cp-test_multinode-858631-m02_multinode-858631-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 cp testdata/cp-test.txt multinode-858631-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 ssh -n multinode-858631-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 cp multinode-858631-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3133866316/001/cp-test_multinode-858631-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 ssh -n multinode-858631-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 cp multinode-858631-m03:/home/docker/cp-test.txt multinode-858631:/home/docker/cp-test_multinode-858631-m03_multinode-858631.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 ssh -n multinode-858631-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 ssh -n multinode-858631 "sudo cat /home/docker/cp-test_multinode-858631-m03_multinode-858631.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 cp multinode-858631-m03:/home/docker/cp-test.txt multinode-858631-m02:/home/docker/cp-test_multinode-858631-m03_multinode-858631-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 ssh -n multinode-858631-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 ssh -n multinode-858631-m02 "sudo cat /home/docker/cp-test_multinode-858631-m03_multinode-858631-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.24s)
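
Every copy above is paired with a read-back: cp moves the file (host-to-node, node-to-host, or node-to-node), then ssh -n against the target node cats it to confirm the contents arrived. One such round trip, using the same profile and node names as the test:

    out/minikube-linux-amd64 -p multinode-858631 cp testdata/cp-test.txt multinode-858631-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-858631 ssh -n multinode-858631-m02 "sudo cat /home/docker/cp-test.txt"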

TestMultiNode/serial/StopNode (3.92s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-858631 node stop m03: (3.082466424s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-858631 status: exit status 7 (420.789976ms)

-- stdout --
	multinode-858631
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-858631-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-858631-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-858631 status --alsologtostderr: exit status 7 (419.303126ms)

-- stdout --
	multinode-858631
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-858631-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-858631-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0224 01:03:31.839267   24060 out.go:296] Setting OutFile to fd 1 ...
	I0224 01:03:31.839368   24060 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 01:03:31.839375   24060 out.go:309] Setting ErrFile to fd 2...
	I0224 01:03:31.839379   24060 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 01:03:31.839480   24060 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-4074/.minikube/bin
	I0224 01:03:31.839613   24060 out.go:303] Setting JSON to false
	I0224 01:03:31.839642   24060 mustload.go:65] Loading cluster: multinode-858631
	I0224 01:03:31.839904   24060 notify.go:220] Checking for updates...
	I0224 01:03:31.841620   24060 config.go:182] Loaded profile config "multinode-858631": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 01:03:31.841637   24060 status.go:255] checking status of multinode-858631 ...
	I0224 01:03:31.841989   24060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:03:31.842046   24060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:03:31.856316   24060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43247
	I0224 01:03:31.856670   24060 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:03:31.857173   24060 main.go:141] libmachine: Using API Version  1
	I0224 01:03:31.857198   24060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:03:31.857571   24060 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:03:31.857773   24060 main.go:141] libmachine: (multinode-858631) Calling .GetState
	I0224 01:03:31.859565   24060 status.go:330] multinode-858631 host status = "Running" (err=<nil>)
	I0224 01:03:31.859582   24060 host.go:66] Checking if "multinode-858631" exists ...
	I0224 01:03:31.859835   24060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:03:31.859866   24060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:03:31.873649   24060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35765
	I0224 01:03:31.873978   24060 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:03:31.874411   24060 main.go:141] libmachine: Using API Version  1
	I0224 01:03:31.874435   24060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:03:31.874724   24060 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:03:31.874894   24060 main.go:141] libmachine: (multinode-858631) Calling .GetIP
	I0224 01:03:31.877552   24060 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:03:31.877924   24060 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
	I0224 01:03:31.877956   24060 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:03:31.878063   24060 host.go:66] Checking if "multinode-858631" exists ...
	I0224 01:03:31.878342   24060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:03:31.878374   24060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:03:31.891893   24060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34425
	I0224 01:03:31.892273   24060 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:03:31.892734   24060 main.go:141] libmachine: Using API Version  1
	I0224 01:03:31.892759   24060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:03:31.893070   24060 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:03:31.893263   24060 main.go:141] libmachine: (multinode-858631) Calling .DriverName
	I0224 01:03:31.893483   24060 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 01:03:31.893511   24060 main.go:141] libmachine: (multinode-858631) Calling .GetSSHHostname
	I0224 01:03:31.896401   24060 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:03:31.896842   24060 main.go:141] libmachine: (multinode-858631) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:96:ba:53", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:00:22 +0000 UTC Type:0 Mac:52:54:00:96:ba:53 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-858631 Clientid:01:52:54:00:96:ba:53}
	I0224 01:03:31.896869   24060 main.go:141] libmachine: (multinode-858631) DBG | domain multinode-858631 has defined IP address 192.168.39.217 and MAC address 52:54:00:96:ba:53 in network mk-multinode-858631
	I0224 01:03:31.897035   24060 main.go:141] libmachine: (multinode-858631) Calling .GetSSHPort
	I0224 01:03:31.897240   24060 main.go:141] libmachine: (multinode-858631) Calling .GetSSHKeyPath
	I0224 01:03:31.897397   24060 main.go:141] libmachine: (multinode-858631) Calling .GetSSHUsername
	I0224 01:03:31.897568   24060 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631/id_rsa Username:docker}
	I0224 01:03:31.992855   24060 ssh_runner.go:195] Run: systemctl --version
	I0224 01:03:31.998248   24060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 01:03:32.014259   24060 kubeconfig.go:92] found "multinode-858631" server: "https://192.168.39.217:8443"
	I0224 01:03:32.014283   24060 api_server.go:165] Checking apiserver status ...
	I0224 01:03:32.014311   24060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 01:03:32.025196   24060 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1850/cgroup
	I0224 01:03:32.034559   24060 api_server.go:181] apiserver freezer: "6:freezer:/kubepods/burstable/pod2a1bcd287381cc62f4271365e9d57dba/fe09023de51d1e49a1fa752b60768dae698e591e4c23166b5a805720a0e8c03a"
	I0224 01:03:32.034610   24060 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod2a1bcd287381cc62f4271365e9d57dba/fe09023de51d1e49a1fa752b60768dae698e591e4c23166b5a805720a0e8c03a/freezer.state
	I0224 01:03:32.042910   24060 api_server.go:203] freezer state: "THAWED"
	I0224 01:03:32.042925   24060 api_server.go:252] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0224 01:03:32.048739   24060 api_server.go:278] https://192.168.39.217:8443/healthz returned 200:
	ok
	I0224 01:03:32.048756   24060 status.go:421] multinode-858631 apiserver status = Running (err=<nil>)
	I0224 01:03:32.048764   24060 status.go:257] multinode-858631 status: &{Name:multinode-858631 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0224 01:03:32.048777   24060 status.go:255] checking status of multinode-858631-m02 ...
	I0224 01:03:32.049064   24060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:03:32.049094   24060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:03:32.063347   24060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41123
	I0224 01:03:32.063697   24060 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:03:32.064155   24060 main.go:141] libmachine: Using API Version  1
	I0224 01:03:32.064183   24060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:03:32.064448   24060 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:03:32.064602   24060 main.go:141] libmachine: (multinode-858631-m02) Calling .GetState
	I0224 01:03:32.066030   24060 status.go:330] multinode-858631-m02 host status = "Running" (err=<nil>)
	I0224 01:03:32.066058   24060 host.go:66] Checking if "multinode-858631-m02" exists ...
	I0224 01:03:32.066311   24060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:03:32.066342   24060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:03:32.080410   24060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41527
	I0224 01:03:32.080782   24060 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:03:32.081207   24060 main.go:141] libmachine: Using API Version  1
	I0224 01:03:32.081252   24060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:03:32.081583   24060 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:03:32.081766   24060 main.go:141] libmachine: (multinode-858631-m02) Calling .GetIP
	I0224 01:03:32.084656   24060 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:03:32.085057   24060 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
	I0224 01:03:32.085079   24060 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:03:32.085187   24060 host.go:66] Checking if "multinode-858631-m02" exists ...
	I0224 01:03:32.085442   24060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:03:32.085487   24060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:03:32.099294   24060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37299
	I0224 01:03:32.099631   24060 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:03:32.100049   24060 main.go:141] libmachine: Using API Version  1
	I0224 01:03:32.100074   24060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:03:32.100327   24060 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:03:32.100495   24060 main.go:141] libmachine: (multinode-858631-m02) Calling .DriverName
	I0224 01:03:32.100661   24060 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 01:03:32.100682   24060 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHHostname
	I0224 01:03:32.103248   24060 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:03:32.103678   24060 main.go:141] libmachine: (multinode-858631-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f2:a2", ip: ""} in network mk-multinode-858631: {Iface:virbr1 ExpiryTime:2023-02-24 02:01:42 +0000 UTC Type:0 Mac:52:54:00:14:f2:a2 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:multinode-858631-m02 Clientid:01:52:54:00:14:f2:a2}
	I0224 01:03:32.103708   24060 main.go:141] libmachine: (multinode-858631-m02) DBG | domain multinode-858631-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:14:f2:a2 in network mk-multinode-858631
	I0224 01:03:32.103852   24060 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHPort
	I0224 01:03:32.103999   24060 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHKeyPath
	I0224 01:03:32.104152   24060 main.go:141] libmachine: (multinode-858631-m02) Calling .GetSSHUsername
	I0224 01:03:32.104256   24060 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-4074/.minikube/machines/multinode-858631-m02/id_rsa Username:docker}
	I0224 01:03:32.184098   24060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 01:03:32.196512   24060 status.go:257] multinode-858631-m02 status: &{Name:multinode-858631-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0224 01:03:32.196545   24060 status.go:255] checking status of multinode-858631-m03 ...
	I0224 01:03:32.196828   24060 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:03:32.196861   24060 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:03:32.212039   24060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36935
	I0224 01:03:32.212419   24060 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:03:32.212911   24060 main.go:141] libmachine: Using API Version  1
	I0224 01:03:32.212932   24060 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:03:32.213252   24060 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:03:32.213447   24060 main.go:141] libmachine: (multinode-858631-m03) Calling .GetState
	I0224 01:03:32.215081   24060 status.go:330] multinode-858631-m03 host status = "Stopped" (err=<nil>)
	I0224 01:03:32.215095   24060 status.go:343] host is not running, skipping remaining checks
	I0224 01:03:32.215100   24060 status.go:257] multinode-858631-m03 status: &{Name:multinode-858631-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.92s)
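
Exit status 7 from the status commands above is the expected outcome, not a failure: minikube status exits non-zero whenever any component of the profile is not running, and the test stops m03 precisely to assert that. A sketch of the assertion, assuming the profile above:

    out/minikube-linux-amd64 -p multinode-858631 node stop m03
    out/minikube-linux-amd64 -p multinode-858631 status
    echo $?    # 7 here, since the m03 host is stopped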

TestMultiNode/serial/RestartKeepsNodes (257.18s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-858631
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-858631
E0224 01:03:53.748234   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/ingress-addon-legacy-443815/client.crt: no such file or directory
E0224 01:04:21.528330   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/ingress-addon-legacy-443815/client.crt: no such file or directory
E0224 01:04:52.742775   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/functional-081341/client.crt: no such file or directory
E0224 01:05:03.936980   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-858631: (1m55.314727707s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-858631 --wait=true -v=8 --alsologtostderr
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-858631 --wait=true -v=8 --alsologtostderr: (2m21.765195174s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-858631
--- PASS: TestMultiNode/serial/RestartKeepsNodes (257.18s)

TestMultiNode/serial/DeleteNode (1.74s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-858631 node delete m03: (1.242706246s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 status --alsologtostderr
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.74s)

TestMultiNode/serial/StopMultiNode (25.61s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 stop
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-858631 stop: (25.439240556s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-858631 status: exit status 7 (83.360549ms)

-- stdout --
	multinode-858631
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-858631-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-858631 status --alsologtostderr: exit status 7 (84.945579ms)

-- stdout --
	multinode-858631
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-858631-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0224 01:08:37.501522   25051 out.go:296] Setting OutFile to fd 1 ...
	I0224 01:08:37.501669   25051 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 01:08:37.501676   25051 out.go:309] Setting ErrFile to fd 2...
	I0224 01:08:37.501681   25051 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 01:08:37.501775   25051 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-4074/.minikube/bin
	I0224 01:08:37.501928   25051 out.go:303] Setting JSON to false
	I0224 01:08:37.501960   25051 mustload.go:65] Loading cluster: multinode-858631
	I0224 01:08:37.501991   25051 notify.go:220] Checking for updates...
	I0224 01:08:37.502344   25051 config.go:182] Loaded profile config "multinode-858631": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 01:08:37.502362   25051 status.go:255] checking status of multinode-858631 ...
	I0224 01:08:37.502749   25051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:08:37.502785   25051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:08:37.522397   25051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45237
	I0224 01:08:37.522767   25051 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:08:37.523346   25051 main.go:141] libmachine: Using API Version  1
	I0224 01:08:37.523366   25051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:08:37.523704   25051 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:08:37.523854   25051 main.go:141] libmachine: (multinode-858631) Calling .GetState
	I0224 01:08:37.525292   25051 status.go:330] multinode-858631 host status = "Stopped" (err=<nil>)
	I0224 01:08:37.525305   25051 status.go:343] host is not running, skipping remaining checks
	I0224 01:08:37.525311   25051 status.go:257] multinode-858631 status: &{Name:multinode-858631 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0224 01:08:37.525336   25051 status.go:255] checking status of multinode-858631-m02 ...
	I0224 01:08:37.525630   25051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0224 01:08:37.525661   25051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0224 01:08:37.539067   25051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40225
	I0224 01:08:37.539389   25051 main.go:141] libmachine: () Calling .GetVersion
	I0224 01:08:37.539871   25051 main.go:141] libmachine: Using API Version  1
	I0224 01:08:37.539892   25051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0224 01:08:37.541270   25051 main.go:141] libmachine: () Calling .GetMachineName
	I0224 01:08:37.541573   25051 main.go:141] libmachine: (multinode-858631-m02) Calling .GetState
	I0224 01:08:37.543122   25051 status.go:330] multinode-858631-m02 host status = "Stopped" (err=<nil>)
	I0224 01:08:37.543137   25051 status.go:343] host is not running, skipping remaining checks
	I0224 01:08:37.543142   25051 status.go:257] multinode-858631-m02 status: &{Name:multinode-858631-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.61s)

TestMultiNode/serial/RestartMultiNode (106.34s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-858631 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E0224 01:08:40.892330   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
E0224 01:08:53.748696   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/ingress-addon-legacy-443815/client.crt: no such file or directory
E0224 01:09:52.742685   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/functional-081341/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-858631 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m45.809093623s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-858631 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (106.34s)

TestMultiNode/serial/ValidateNameConflict (57.17s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-858631
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-858631-m02 --driver=kvm2 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-858631-m02 --driver=kvm2 : exit status 14 (62.239275ms)

-- stdout --
	* [multinode-858631-m02] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15909-4074/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-4074/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-858631-m02' is duplicated with machine name 'multinode-858631-m02' in profile 'multinode-858631'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-858631-m03 --driver=kvm2 
E0224 01:11:15.788042   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/functional-081341/client.crt: no such file or directory
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-858631-m03 --driver=kvm2 : (55.821989186s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-858631
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-858631: exit status 80 (226.108141ms)

-- stdout --
	* Adding node m03 to cluster multinode-858631
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-858631-m03 already exists in multinode-858631-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-858631-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-858631-m03: (1.017697526s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (57.17s)
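
Both rejections above come from profile bookkeeping alone, so they return almost immediately: exit status 14 (MK_USAGE) for a profile name that collides with an existing machine name, and exit status 80 (GUEST_NODE_ADD) when node add would recreate a machine name that is already taken. A condensed reproduction, assuming a running multinode-858631 cluster:

    out/minikube-linux-amd64 start -p multinode-858631-m02 --driver=kvm2    # rejected: duplicates machine m02
    out/minikube-linux-amd64 start -p multinode-858631-m03 --driver=kvm2    # allowed: no collision yet
    out/minikube-linux-amd64 node add -p multinode-858631                   # rejected: m03 name already taken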

TestPreload (165.5s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-648393 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-648393 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m24.99552404s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-648393 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-648393 -- docker pull gcr.io/k8s-minikube/busybox: (1.173501673s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-648393
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-648393: (13.09879154s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-648393 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E0224 01:13:40.891951   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
E0224 01:13:53.748223   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/ingress-addon-legacy-443815/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-648393 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m4.955356129s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-648393 -- docker images
helpers_test.go:175: Cleaning up "test-preload-648393" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-648393
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-648393: (1.061090782s)
--- PASS: TestPreload (165.50s)
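
The flow above is the preload regression check: start with --preload=false on an older Kubernetes (v1.24.4), docker pull an extra image inside the node, stop, restart without pinning a version, then list docker images to confirm the manually pulled busybox image survived the restart. Condensed, assuming the same profile name:

    out/minikube-linux-amd64 start -p test-preload-648393 --preload=false --kubernetes-version=v1.24.4 --driver=kvm2
    out/minikube-linux-amd64 ssh -p test-preload-648393 -- docker pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-amd64 stop -p test-preload-648393
    out/minikube-linux-amd64 start -p test-preload-648393 --driver=kvm2
    out/minikube-linux-amd64 ssh -p test-preload-648393 -- docker images    # busybox should still be present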

TestScheduledStopUnix (125.14s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-690163 --memory=2048 --driver=kvm2 
E0224 01:14:52.743107   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/functional-081341/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-690163 --memory=2048 --driver=kvm2 : (53.546758458s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-690163 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-690163 -n scheduled-stop-690163
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-690163 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-690163 --cancel-scheduled
E0224 01:15:16.890647   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/ingress-addon-legacy-443815/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-690163 -n scheduled-stop-690163
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-690163
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-690163 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-690163
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-690163: exit status 7 (63.69075ms)

-- stdout --
	scheduled-stop-690163
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-690163 -n scheduled-stop-690163
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-690163 -n scheduled-stop-690163: exit status 7 (66.508255ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-690163" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-690163
--- PASS: TestScheduledStopUnix (125.14s)
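
The sequence above arms, cancels, and re-arms the scheduled stop, then confirms the machine actually went down (the exit status 7 checks). The user-facing flow, assuming a running profile:

    out/minikube-linux-amd64 stop -p scheduled-stop-690163 --schedule 5m       # arm a stop five minutes out
    out/minikube-linux-amd64 stop -p scheduled-stop-690163 --cancel-scheduled  # disarm it
    out/minikube-linux-amd64 stop -p scheduled-stop-690163 --schedule 15s      # re-arm with a short fuse
    out/minikube-linux-amd64 status -p scheduled-stop-690163                   # exit status 7 once the stop has fired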

TestSkaffold (88.19s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe216144654 version
skaffold_test.go:63: skaffold version: v2.1.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-406990 --memory=2600 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-406990 --memory=2600 --driver=kvm2 : (54.204645632s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe216144654 run --minikube-profile skaffold-406990 --kube-context skaffold-406990 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe216144654 run --minikube-profile skaffold-406990 --kube-context skaffold-406990 --status-check=true --port-forward=false --interactive=false: (22.351465919s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-69dcfb99bb-bcbxs" [b012ee73-f8b6-4d49-8fd0-37eba8be48d8] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.017156366s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-9b6cf89df-cmfzl" [ffa3c463-1c16-4c69-813b-155916001542] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.006832172s
helpers_test.go:175: Cleaning up "skaffold-406990" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-406990
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-406990: (1.079389795s)
--- PASS: TestSkaffold (88.19s)
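
The skaffold run above deploys the leeroy-app/leeroy-web example and relies on skaffold's own status checking rather than polling from the test. The equivalent invocation, assuming a skaffold binary on PATH instead of the test's temp copy:

    skaffold run --minikube-profile skaffold-406990 --kube-context skaffold-406990 --status-check=true --port-forward=false --interactive=false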

TestRunningBinaryUpgrade (184.68s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.6.2.1440040992.exe start -p running-upgrade-676643 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:128: (dbg) Done: /tmp/minikube-v1.6.2.1440040992.exe start -p running-upgrade-676643 --memory=2200 --vm-driver=kvm2 : (1m43.773039113s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-676643 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-676643 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m19.407870504s)
helpers_test.go:175: Cleaning up "running-upgrade-676643" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-676643
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-676643: (1.20068733s)
--- PASS: TestRunningBinaryUpgrade (184.68s)

TestKubernetesUpgrade (238.34s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-707178 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:230: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-707178 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : (1m39.144190651s)
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-707178
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-707178: (4.158688122s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-707178 status --format={{.Host}}
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-707178 status --format={{.Host}}: exit status 7 (76.709385ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
version_upgrade_test.go:251: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-707178 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:251: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-707178 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=kvm2 : (58.59045206s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-707178 version --output=json
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-707178 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-707178 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 : exit status 106 (101.617746ms)

-- stdout --
	* [kubernetes-upgrade-707178] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15909-4074/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-4074/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.26.1 cluster to v1.16.0
	* Suggestion: 

	    1) Recreate the cluster with Kubernetes 1.16.0, by running:

	    minikube delete -p kubernetes-upgrade-707178
	    minikube start -p kubernetes-upgrade-707178 --kubernetes-version=v1.16.0

	    2) Create a second cluster with Kubernetes 1.16.0, by running:

	    minikube start -p kubernetes-upgrade-7071782 --kubernetes-version=v1.16.0

	    3) Use the existing cluster at version Kubernetes 1.26.1, by running:

	    minikube start -p kubernetes-upgrade-707178 --kubernetes-version=v1.26.1

** /stderr **
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-707178 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:283: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-707178 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=kvm2 : (1m14.580766432s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-707178" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-707178
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-707178: (1.616038468s)
--- PASS: TestKubernetesUpgrade (238.34s)
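
Exit status 106 is the K8S_DOWNGRADE_UNSUPPORTED code shown in the stderr block above, so the refusal is detectable programmatically rather than by scraping output. A hedged sketch of how a caller outside the suite might assert that behavior (binary path and profile name reused from this run purely for illustration):

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Ask the v1.26.1 cluster to go back to v1.16.0; minikube should refuse
	// with exit status 106 instead of touching the cluster.
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "kubernetes-upgrade-707178",
		"--memory=2200", "--kubernetes-version=v1.16.0", "--driver=kvm2")
	err := cmd.Run()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		log.Fatal("downgrade unexpectedly succeeded")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 106:
		fmt.Println("downgrade rejected; delete and recreate, or stay on v1.26.1")
	default:
		log.Fatalf("failed for an unrelated reason: %v", err)
	}
}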

TestStoppedBinaryUpgrade/Setup (0.36s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.36s)

TestStoppedBinaryUpgrade/Upgrade (194.66s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  /tmp/minikube-v1.6.2.3824097163.exe start -p stopped-upgrade-273695 --memory=2200 --vm-driver=kvm2 
E0224 01:18:40.892076   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
E0224 01:18:53.749248   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/ingress-addon-legacy-443815/client.crt: no such file or directory
version_upgrade_test.go:191: (dbg) Done: /tmp/minikube-v1.6.2.3824097163.exe start -p stopped-upgrade-273695 --memory=2200 --vm-driver=kvm2 : (1m39.553383842s)
version_upgrade_test.go:200: (dbg) Run:  /tmp/minikube-v1.6.2.3824097163.exe -p stopped-upgrade-273695 stop
version_upgrade_test.go:200: (dbg) Done: /tmp/minikube-v1.6.2.3824097163.exe -p stopped-upgrade-273695 stop: (14.19270622s)
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-273695 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E0224 01:19:52.742206   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/functional-081341/client.crt: no such file or directory
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-273695 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m20.908446446s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (194.66s)

TestStoppedBinaryUpgrade/MinikubeLogs (1s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-273695
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.00s)

TestPause/serial/Start (95.06s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-966618 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
E0224 01:22:32.420586   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/skaffold-406990/client.crt: no such file or directory
E0224 01:22:32.425983   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/skaffold-406990/client.crt: no such file or directory
E0224 01:22:32.436271   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/skaffold-406990/client.crt: no such file or directory
E0224 01:22:32.456548   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/skaffold-406990/client.crt: no such file or directory
E0224 01:22:32.496849   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/skaffold-406990/client.crt: no such file or directory
E0224 01:22:32.577216   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/skaffold-406990/client.crt: no such file or directory
E0224 01:22:32.738070   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/skaffold-406990/client.crt: no such file or directory
E0224 01:22:33.058225   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/skaffold-406990/client.crt: no such file or directory
E0224 01:22:33.698481   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/skaffold-406990/client.crt: no such file or directory
E0224 01:22:34.978981   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/skaffold-406990/client.crt: no such file or directory
E0224 01:22:37.539226   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/skaffold-406990/client.crt: no such file or directory
E0224 01:22:42.660099   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/skaffold-406990/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-966618 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m35.062062827s)
--- PASS: TestPause/serial/Start (95.06s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-394034 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-394034 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (72.122735ms)

-- stdout --
	* [NoKubernetes-394034] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15909-4074/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-4074/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:

	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
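
Exit status 14 here is minikube's usage-error path: --no-kubernetes and an explicit --kubernetes-version are mutually exclusive. The rule is small enough to restate as a validation function; this is an illustrative paraphrase of the behavior shown in the log, not minikube's actual implementation:

package main

import (
	"errors"
	"fmt"
)

// validateStartFlags restates the usage rule the log demonstrates:
// --no-kubernetes and an explicit --kubernetes-version cannot be combined.
// Illustrative only; not minikube's real flag handling.
func validateStartFlags(noKubernetes bool, kubernetesVersion string) error {
	if noKubernetes && kubernetesVersion != "" {
		return errors.New("cannot specify --kubernetes-version with --no-kubernetes")
	}
	return nil
}

func main() {
	if err := validateStartFlags(true, "1.20"); err != nil {
		fmt.Println("MK_USAGE (exit status 14):", err)
	}
}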

TestNoKubernetes/serial/StartWithK8s (68.88s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-394034 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-394034 --driver=kvm2 : (1m8.622467525s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-394034 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (68.88s)

TestNetworkPlugins/group/auto/Start (82.06s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p auto-537815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
E0224 01:25:16.261998   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/skaffold-406990/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p auto-537815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m22.059742234s)
--- PASS: TestNetworkPlugins/group/auto/Start (82.06s)

TestNoKubernetes/serial/StartWithStopK8s (19.82s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-394034 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-394034 --no-kubernetes --driver=kvm2 : (18.290417831s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-394034 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-394034 status -o json: exit status 2 (262.513595ms)

-- stdout --
	{"Name":"NoKubernetes-394034","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-394034
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-394034: (1.268681241s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.82s)
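
The single-line JSON payload printed by "status -o json" above is easy to consume from Go. A sketch with the field set inferred from this one run's output (the struct is an assumption based on that payload, not a published schema):

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Status mirrors the fields visible in the status output above.
type Status struct {
	Name       string `json:"Name"`
	Host       string `json:"Host"`
	Kubelet    string `json:"Kubelet"`
	APIServer  string `json:"APIServer"`
	Kubeconfig string `json:"Kubeconfig"`
	Worker     bool   `json:"Worker"`
}

func main() {
	raw := []byte(`{"Name":"NoKubernetes-394034","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`)

	var s Status
	if err := json.Unmarshal(raw, &s); err != nil {
		log.Fatal(err)
	}
	// Callers shelling out should read stdout before checking the exit code:
	// a stopped component makes the command exit 2 while still printing JSON,
	// exactly as in the run above.
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", s.Host, s.Kubelet, s.APIServer)
}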

TestNoKubernetes/serial/Start (33.51s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-394034 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-394034 --no-kubernetes --driver=kvm2 : (33.509299455s)
--- PASS: TestNoKubernetes/serial/Start (33.51s)

TestNetworkPlugins/group/kindnet/Start (80.8s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-537815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-537815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m20.797142562s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (80.80s)
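
Taken together, the TestNetworkPlugins starts in this report are one recipe with only the networking selector varied. A condensed, illustrative table of the flag per variant, with the common flags and the -537815 profile suffix copied from the log (the real net_test.go drives this differently):

package main

import "fmt"

func main() {
	// One start recipe, varied only by the networking flag; values are
	// taken verbatim from the commands recorded in this report.
	matrix := map[string][]string{
		"auto":               nil,
		"kindnet":            {"--cni=kindnet"},
		"calico":             {"--cni=calico"},
		"custom-flannel":     {"--cni=testdata/kube-flannel.yaml"},
		"false":              {"--cni=false"},
		"enable-default-cni": {"--enable-default-cni=true"},
		"flannel":            {"--cni=flannel"},
		"bridge":             {"--cni=bridge"},
		"kubenet":            {"--network-plugin=kubenet"},
	}
	for name, extra := range matrix {
		args := append([]string{"start", "-p", name + "-537815", "--memory=3072",
			"--alsologtostderr", "--wait=true", "--wait-timeout=15m", "--driver=kvm2"},
			extra...)
		fmt.Println("minikube", args)
	}
}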

TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-394034 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-394034 "sudo systemctl is-active --quiet service kubelet": exit status 1 (231.798757ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
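
The check above leans on systemd semantics: "systemctl is-active --quiet" exits 0 only for an active unit, and the "status 3" in stderr is systemd's code for an inactive one, so the non-zero exit is the success condition here. A sketch of the same probe driven through minikube ssh:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// A zero exit would mean the kubelet unit is active, which would be a
	// failure for a profile running without Kubernetes.
	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-394034",
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		log.Fatal("kubelet is active despite --no-kubernetes")
	case errors.As(err, &exitErr):
		fmt.Println("kubelet not running, as expected (ssh exit", exitErr.ExitCode(), ")")
	default:
		log.Fatalf("could not reach the VM: %v", err)
	}
}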

TestNoKubernetes/serial/ProfileList (2.08s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.407573652s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.08s)
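
"profile list --output=json" emits a JSON object suitable for scripting. Recent minikube releases group profiles under "valid" and "invalid" keys, but that schema is an assumption here, so this sketch decodes defensively rather than committing to a struct:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Decode into RawMessage so the sketch keeps working even if the exact
	// profile schema differs between minikube versions.
	out, err := exec.Command("out/minikube-linux-amd64",
		"profile", "list", "--output=json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var groups map[string]json.RawMessage
	if err := json.Unmarshal(out, &groups); err != nil {
		log.Fatal(err)
	}
	for key, raw := range groups {
		fmt.Printf("%s: %d bytes of profile data\n", key, len(raw))
	}
}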

TestNoKubernetes/serial/Stop (2.21s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-394034
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-394034: (2.213233532s)
--- PASS: TestNoKubernetes/serial/Stop (2.21s)

TestNoKubernetes/serial/StartNoArgs (50.26s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-394034 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-394034 --driver=kvm2 : (50.25802154s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (50.26s)

TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-537815 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

TestNetworkPlugins/group/auto/NetCatPod (13.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-537815 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-l8478" [a5776a93-7dc6-42f7-89b7-793bfd5b5865] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-l8478" [a5776a93-7dc6-42f7-89b7-793bfd5b5865] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.008173002s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.29s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-537815 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-537815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-537815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
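
With HairPin done, the auto group has run the full connectivity trio that every NetCatPod variant below repeats: in-cluster DNS resolution, a localhost dial, and a hairpin dial where the pod reaches itself back through its own service. A sketch of the three probes via kubectl exec, with the context, deployment name, and port taken from the commands above:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// kubectlExec runs a command inside the netcat deployment, the same way the
// net_test.go checks above shell out to kubectl.
func kubectlExec(context string, args ...string) error {
	base := []string{"--context", context, "exec", "deployment/netcat", "--"}
	out, err := exec.Command("kubectl", append(base, args...)...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%w\n%s", err, out)
	}
	return nil
}

func main() {
	const ctx = "auto-537815" // profile/context from the run above

	checks := map[string][]string{
		"dns":       {"nslookup", "kubernetes.default"},
		"localhost": {"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"},
		// Hairpin: the pod dials its own service name, so the traffic loops
		// back through the service to the same pod.
		"hairpin": {"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},
	}
	for name, cmd := range checks {
		if err := kubectlExec(ctx, cmd...); err != nil {
			log.Fatalf("%s check failed: %v", name, err)
		}
		fmt.Println(name, "check passed")
	}
}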

TestNetworkPlugins/group/calico/Start (119.62s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p calico-537815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
E0224 01:27:00.427343   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/gvisor-694714/client.crt: no such file or directory
E0224 01:27:00.432995   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/gvisor-694714/client.crt: no such file or directory
E0224 01:27:00.443285   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/gvisor-694714/client.crt: no such file or directory
E0224 01:27:00.463555   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/gvisor-694714/client.crt: no such file or directory
E0224 01:27:00.503837   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/gvisor-694714/client.crt: no such file or directory
E0224 01:27:00.584191   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/gvisor-694714/client.crt: no such file or directory
E0224 01:27:00.748398   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/gvisor-694714/client.crt: no such file or directory
E0224 01:27:01.069453   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/gvisor-694714/client.crt: no such file or directory
E0224 01:27:01.710287   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/gvisor-694714/client.crt: no such file or directory
E0224 01:27:02.990984   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/gvisor-694714/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p calico-537815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m59.623568236s)
--- PASS: TestNetworkPlugins/group/calico/Start (119.62s)

TestNetworkPlugins/group/custom-flannel/Start (133.64s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-537815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
E0224 01:27:05.552164   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/gvisor-694714/client.crt: no such file or directory
E0224 01:27:10.673036   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/gvisor-694714/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-537815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (2m13.639593227s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (133.64s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-394034 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-394034 "sudo systemctl is-active --quiet service kubelet": exit status 1 (224.057764ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

TestNetworkPlugins/group/false/Start (134.59s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p false-537815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
E0224 01:27:20.913746   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/gvisor-694714/client.crt: no such file or directory
E0224 01:27:32.420054   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/skaffold-406990/client.crt: no such file or directory
E0224 01:27:41.394663   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/gvisor-694714/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p false-537815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (2m14.590601235s)
--- PASS: TestNetworkPlugins/group/false/Start (134.59s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-rmsq4" [40f96751-e84c-4d6c-8b94-5dce65a32639] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.041562904s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)
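
The helper above polls the API until a pod matching the label is Running. An equivalent one-shot formulation uses kubectl wait (context, label, and namespace from this run; this is an alternative phrasing, not what helpers_test.go does internally):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// One-shot version of the "waiting 10m0s for pods matching app=kindnet"
	// loop above: block until the labeled pod reports Ready or time out.
	cmd := exec.Command("kubectl", "--context", "kindnet-537815",
		"wait", "--for=condition=Ready", "pod",
		"-l", "app=kindnet", "-n", "kube-system", "--timeout=10m")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("kindnet pod never became Ready: %v\n%s", err, out)
	}
	log.Println("kindnet controller pod Ready")
}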

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-537815 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (13.43s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-537815 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-2djzr" [e754de98-01e0-4f3d-9617-1cdeeabd3275] Pending
helpers_test.go:344: "netcat-694fc96674-2djzr" [e754de98-01e0-4f3d-9617-1cdeeabd3275] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0224 01:27:55.788829   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/functional-081341/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-2djzr" [e754de98-01e0-4f3d-9617-1cdeeabd3275] Running
E0224 01:28:00.102715   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/skaffold-406990/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.015333532s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.43s)

TestNetworkPlugins/group/kindnet/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-537815 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.26s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-537815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-537815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/Start (98.66s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-537815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E0224 01:28:40.891880   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
E0224 01:28:53.748888   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/ingress-addon-legacy-443815/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-537815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m38.655172754s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (98.66s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-v6p7r" [490dbe73-d463-45bb-8458-1f3784c191de] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.02319337s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-537815 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

TestNetworkPlugins/group/calico/NetCatPod (18.61s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-537815 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-2lw8k" [a5729dcc-9b14-4ef6-bb9f-c0477c4c6887] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-2lw8k" [a5729dcc-9b14-4ef6-bb9f-c0477c4c6887] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 18.009500108s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (18.61s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-537815 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-537815 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-79m56" [ec0740ef-8457-43af-aab2-5aa5421e0639] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-79m56" [ec0740ef-8457-43af-aab2-5aa5421e0639] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.016961378s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.39s)

TestNetworkPlugins/group/calico/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-537815 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.30s)

TestNetworkPlugins/group/calico/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-537815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-537815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-537815 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-537815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-537815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)

TestNetworkPlugins/group/false/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-537815 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.25s)

TestNetworkPlugins/group/false/NetCatPod (14.45s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-537815 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-5d6zx" [5e94d79f-7a39-4d2a-8536-76637fd267d9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-5d6zx" [5e94d79f-7a39-4d2a-8536-76637fd267d9] Running
E0224 01:29:44.278415   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/gvisor-694714/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 14.008470037s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (14.45s)

TestNetworkPlugins/group/flannel/Start (92.55s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-537815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p flannel-537815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m32.546819419s)
--- PASS: TestNetworkPlugins/group/flannel/Start (92.55s)

TestNetworkPlugins/group/false/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-537815 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.20s)

TestNetworkPlugins/group/false/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-537815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.18s)

TestNetworkPlugins/group/false/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-537815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.15s)

TestNetworkPlugins/group/bridge/Start (101.28s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-537815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p bridge-537815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m41.281561932s)
--- PASS: TestNetworkPlugins/group/bridge/Start (101.28s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-537815 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-537815 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-gpzjh" [cbcb261e-ecfa-4231-9f14-3d33c4257aba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-gpzjh" [cbcb261e-ecfa-4231-9f14-3d33c4257aba] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.011303232s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.29s)

TestNetworkPlugins/group/kubenet/Start (117.44s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-537815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-537815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m57.439930869s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (117.44s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-537815 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-537815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-537815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

TestStartStop/group/old-k8s-version/serial/FirstStart (196.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-505768 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-505768 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (3m16.856483071s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (196.86s)
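
After a start pinned with --kubernetes-version, the natural follow-up is confirming the API server actually reports that version, which the upgrade test earlier did with kubectl version --output=json. A sketch against this profile's context (the serverVersion/gitVersion fields are standard kubectl JSON output):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Ask the cluster for its version; --output=json makes the reply parseable.
	out, err := exec.Command("kubectl", "--context", "old-k8s-version-505768",
		"version", "--output=json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var v struct {
		ServerVersion struct {
			GitVersion string `json:"gitVersion"`
		} `json:"serverVersion"`
	}
	if err := json.Unmarshal(out, &v); err != nil {
		log.Fatal(err)
	}
	if v.ServerVersion.GitVersion != "v1.16.0" {
		log.Fatalf("expected the pinned v1.16.0, got %q", v.ServerVersion.GitVersion)
	}
	fmt.Println("API server reports", v.ServerVersion.GitVersion)
}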

TestNetworkPlugins/group/flannel/ControllerPod (5.54s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-5pb4m" [98dc229b-d172-4a15-850c-bf8490d0608a] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.53695328s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.54s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-537815 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

TestNetworkPlugins/group/flannel/NetCatPod (13.4s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-537815 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-rkmn7" [ae5424ec-8df9-4326-be72-f8d58c4a6888] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-rkmn7" [ae5424ec-8df9-4326-be72-f8d58c4a6888] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.011920689s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.40s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-537815 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

TestNetworkPlugins/group/bridge/NetCatPod (13.32s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-537815 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-m2fk4" [f312e686-8e47-4aeb-b53b-ae13fc7a6a93] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0224 01:31:33.050637   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/auto-537815/client.crt: no such file or directory
E0224 01:31:33.055913   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/auto-537815/client.crt: no such file or directory
E0224 01:31:33.066163   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/auto-537815/client.crt: no such file or directory
E0224 01:31:33.086429   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/auto-537815/client.crt: no such file or directory
E0224 01:31:33.126775   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/auto-537815/client.crt: no such file or directory
E0224 01:31:33.207278   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/auto-537815/client.crt: no such file or directory
E0224 01:31:33.367676   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/auto-537815/client.crt: no such file or directory
E0224 01:31:33.687828   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/auto-537815/client.crt: no such file or directory
E0224 01:31:34.328962   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/auto-537815/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-m2fk4" [f312e686-8e47-4aeb-b53b-ae13fc7a6a93] Running
E0224 01:31:43.290934   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/auto-537815/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.010804298s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.32s)

TestNetworkPlugins/group/flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-537815 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

TestNetworkPlugins/group/flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-537815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0224 01:31:35.609889   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/auto-537815/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-537815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-537815 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.25s)

TestNetworkPlugins/group/bridge/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-537815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.22s)

TestNetworkPlugins/group/bridge/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-537815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.21s)

TestStartStop/group/no-preload/serial/FirstStart (100.08s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-884053 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.26.1
E0224 01:31:56.891522   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/ingress-addon-legacy-443815/client.crt: no such file or directory
E0224 01:32:00.429599   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/gvisor-694714/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-884053 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.26.1: (1m40.08328437s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (100.08s)
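
With --preload=false this start path skips the preloaded image tarball, so every Kubernetes image is pulled individually during provisioning. The invocation, reproducible as-is:

	out/minikube-linux-amd64 start -p no-preload-884053 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --kubernetes-version=v1.26.1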

TestNetworkPlugins/group/kubenet/KubeletFlags (0.78s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-537815 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.78s)

TestNetworkPlugins/group/kubenet/NetCatPod (13.4s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-537815 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-4fxrc" [18e57dee-848a-46a9-a1f1-9e20af4b86b8] Pending
helpers_test.go:344: "netcat-694fc96674-4fxrc" [18e57dee-848a-46a9-a1f1-9e20af4b86b8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-4fxrc" [18e57dee-848a-46a9-a1f1-9e20af4b86b8] Running
E0224 01:32:14.012424   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/auto-537815/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.010302765s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.40s)
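
NetCatPod force-replaces the netcat deployment and then polls up to 15m for an app=netcat pod to become Ready; the later DNS/Localhost/HairPin probes all exec into this pod. The deploy step as run, plus a hypothetical manual equivalent of the readiness wait:

	kubectl --context kubenet-537815 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context kubenet-537815 wait --for=condition=ready pod -l app=netcat --timeout=15m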

TestStartStop/group/embed-certs/serial/FirstStart (100.54s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-070599 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.26.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-070599 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.26.1: (1m40.539906402s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (100.54s)

TestNetworkPlugins/group/kubenet/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-537815 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.17s)

TestNetworkPlugins/group/kubenet/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-537815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.14s)

TestNetworkPlugins/group/kubenet/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-537815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.16s)
E0224 01:39:46.837057   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/custom-flannel-537815/client.crt: no such file or directory
E0224 01:39:49.031654   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kubenet-537815/client.crt: no such file or directory
E0224 01:39:52.741814   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/functional-081341/client.crt: no such file or directory
E0224 01:40:03.154208   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/enable-default-cni-537815/client.crt: no such file or directory
E0224 01:40:03.728870   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/false-537815/client.crt: no such file or directory
E0224 01:40:30.838688   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/enable-default-cni-537815/client.crt: no such file or directory

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (119.28s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-597952 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.26.1
E0224 01:32:45.634773   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kindnet-537815/client.crt: no such file or directory
E0224 01:32:45.640039   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kindnet-537815/client.crt: no such file or directory
E0224 01:32:45.650278   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kindnet-537815/client.crt: no such file or directory
E0224 01:32:45.670548   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kindnet-537815/client.crt: no such file or directory
E0224 01:32:45.710882   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kindnet-537815/client.crt: no such file or directory
E0224 01:32:45.791209   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kindnet-537815/client.crt: no such file or directory
E0224 01:32:45.951625   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kindnet-537815/client.crt: no such file or directory
E0224 01:32:46.272814   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kindnet-537815/client.crt: no such file or directory
E0224 01:32:46.913269   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kindnet-537815/client.crt: no such file or directory
E0224 01:32:48.193665   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kindnet-537815/client.crt: no such file or directory
E0224 01:32:50.754701   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kindnet-537815/client.crt: no such file or directory
E0224 01:32:54.973430   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/auto-537815/client.crt: no such file or directory
E0224 01:32:55.875586   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kindnet-537815/client.crt: no such file or directory
E0224 01:33:06.116196   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kindnet-537815/client.crt: no such file or directory
E0224 01:33:26.596827   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kindnet-537815/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-597952 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.26.1: (1m59.282092119s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (119.28s)

TestStartStop/group/no-preload/serial/DeployApp (10.46s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-884053 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [be1eff98-e742-43e0-8393-7d05b7aaab25] Pending
helpers_test.go:344: "busybox" [be1eff98-e742-43e0-8393-7d05b7aaab25] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [be1eff98-e742-43e0-8393-7d05b7aaab25] Running
E0224 01:33:40.892380   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.027808824s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-884053 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.46s)
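
DeployApp creates a busybox pod from testdata/busybox.yaml, waits for integration-test=busybox to report healthy, then execs a trivial command to confirm the runtime accepts exec. The same two steps by hand:

	kubectl --context no-preload-884053 create -f testdata/busybox.yaml
	kubectl --context no-preload-884053 exec busybox -- /bin/sh -c "ulimit -n"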

TestStartStop/group/embed-certs/serial/DeployApp (9.43s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-070599 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d48d75de-5c3e-4687-ad75-f88b845235cc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d48d75de-5c3e-4687-ad75-f88b845235cc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.035849119s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-070599 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.43s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-884053 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-884053 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)
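
The addon is enabled with deliberately bogus overrides (an echoserver image and the fake.domain registry), so the follow-up describe appears to verify that the overrides land in the Deployment spec rather than that metrics-server actually serves metrics. As run:

	out/minikube-linux-amd64 addons enable metrics-server -p no-preload-884053 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	kubectl --context no-preload-884053 describe deploy/metrics-server -n kube-system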

TestStartStop/group/no-preload/serial/Stop (13.17s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-884053 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-884053 --alsologtostderr -v=3: (13.165124589s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.17s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.49s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-505768 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d5a767da-54b5-448e-a9d4-c1e6134c504d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d5a767da-54b5-448e-a9d4-c1e6134c504d] Running
E0224 01:33:53.748867   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/ingress-addon-legacy-443815/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.028719252s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-505768 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.49s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-070599 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-070599 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/embed-certs/serial/Stop (13.12s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-070599 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-070599 --alsologtostderr -v=3: (13.124334741s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.12s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.02s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-505768 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-505768 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/old-k8s-version/serial/Stop (13.15s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-505768 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-505768 --alsologtostderr -v=3: (13.153090859s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.15s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-884053 -n no-preload-884053
E0224 01:33:59.766987   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/calico-537815/client.crt: no such file or directory
E0224 01:33:59.772494   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/calico-537815/client.crt: no such file or directory
E0224 01:33:59.782760   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/calico-537815/client.crt: no such file or directory
E0224 01:33:59.803075   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/calico-537815/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-884053 -n no-preload-884053: exit status 7 (71.500892ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-884053 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
E0224 01:33:59.843713   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/calico-537815/client.crt: no such file or directory
E0224 01:33:59.924806   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/calico-537815/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
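
Here exit status 7 from minikube status accompanies a Stopped host, which the test explicitly tolerates ("may be ok") before enabling the dashboard addon against the stopped profile:

	out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-884053 -n no-preload-884053
	out/minikube-linux-amd64 addons enable dashboard -p no-preload-884053 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4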

TestStartStop/group/no-preload/serial/SecondStart (314.78s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-884053 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.26.1
E0224 01:34:00.085489   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/calico-537815/client.crt: no such file or directory
E0224 01:34:00.406179   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/calico-537815/client.crt: no such file or directory
E0224 01:34:01.047148   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/calico-537815/client.crt: no such file or directory
E0224 01:34:02.328115   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/calico-537815/client.crt: no such file or directory
E0224 01:34:04.889306   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/calico-537815/client.crt: no such file or directory
E0224 01:34:07.557242   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kindnet-537815/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-884053 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.26.1: (5m14.51649089s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-884053 -n no-preload-884053
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (314.78s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-070599 -n embed-certs-070599
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-070599 -n embed-certs-070599: exit status 7 (66.008002ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-070599 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (319.67s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-070599 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.26.1
E0224 01:34:10.010297   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/calico-537815/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-070599 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.26.1: (5m19.389272017s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-070599 -n embed-certs-070599
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (319.67s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-505768 -n old-k8s-version-505768
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-505768 -n old-k8s-version-505768: exit status 7 (65.939738ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-505768 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/old-k8s-version/serial/SecondStart (475.94s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-505768 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
E0224 01:34:16.894382   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/auto-537815/client.crt: no such file or directory
E0224 01:34:19.153722   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/custom-flannel-537815/client.crt: no such file or directory
E0224 01:34:19.158992   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/custom-flannel-537815/client.crt: no such file or directory
E0224 01:34:19.169241   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/custom-flannel-537815/client.crt: no such file or directory
E0224 01:34:19.189552   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/custom-flannel-537815/client.crt: no such file or directory
E0224 01:34:19.229858   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/custom-flannel-537815/client.crt: no such file or directory
E0224 01:34:19.310170   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/custom-flannel-537815/client.crt: no such file or directory
E0224 01:34:19.471089   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/custom-flannel-537815/client.crt: no such file or directory
E0224 01:34:19.791862   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/custom-flannel-537815/client.crt: no such file or directory
E0224 01:34:20.250923   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/calico-537815/client.crt: no such file or directory
E0224 01:34:20.432543   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/custom-flannel-537815/client.crt: no such file or directory
E0224 01:34:21.713051   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/custom-flannel-537815/client.crt: no such file or directory
E0224 01:34:24.273432   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/custom-flannel-537815/client.crt: no such file or directory
E0224 01:34:29.394524   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/custom-flannel-537815/client.crt: no such file or directory
E0224 01:34:36.044871   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/false-537815/client.crt: no such file or directory
E0224 01:34:36.050174   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/false-537815/client.crt: no such file or directory
E0224 01:34:36.060442   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/false-537815/client.crt: no such file or directory
E0224 01:34:36.080703   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/false-537815/client.crt: no such file or directory
E0224 01:34:36.120967   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/false-537815/client.crt: no such file or directory
E0224 01:34:36.201321   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/false-537815/client.crt: no such file or directory
E0224 01:34:36.361931   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/false-537815/client.crt: no such file or directory
E0224 01:34:36.682862   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/false-537815/client.crt: no such file or directory
E0224 01:34:37.323585   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/false-537815/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-505768 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (7m55.684830098s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-505768 -n old-k8s-version-505768
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (475.94s)
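
This profile restarts on the oldest Kubernetes line in the run (v1.16.0) with explicit KVM plumbing: --kvm-network=default and --kvm-qemu-uri=qemu:///system pin the libvirt network and connection URI, and --disable-driver-mounts turns off the driver's built-in host mounts. Verbatim:

	out/minikube-linux-amd64 start -p old-k8s-version-505768 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --kubernetes-version=v1.16.0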

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.45s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-597952 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [28882141-a8d5-498e-b853-b5a9760c9fb1] Pending
E0224 01:34:38.603860   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/false-537815/client.crt: no such file or directory
helpers_test.go:344: "busybox" [28882141-a8d5-498e-b853-b5a9760c9fb1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0224 01:34:39.634678   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/custom-flannel-537815/client.crt: no such file or directory
E0224 01:34:40.731861   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/calico-537815/client.crt: no such file or directory
E0224 01:34:41.164089   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/false-537815/client.crt: no such file or directory
helpers_test.go:344: "busybox" [28882141-a8d5-498e-b853-b5a9760c9fb1] Running
E0224 01:34:46.285202   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/false-537815/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.025748229s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-597952 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.45s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-597952 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-597952 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.14s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-597952 --alsologtostderr -v=3
E0224 01:34:52.742158   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/functional-081341/client.crt: no such file or directory
E0224 01:34:56.526307   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/false-537815/client.crt: no such file or directory
E0224 01:35:00.114942   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/custom-flannel-537815/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-597952 --alsologtostderr -v=3: (13.136724395s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.14s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-597952 -n default-k8s-diff-port-597952
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-597952 -n default-k8s-diff-port-597952: exit status 7 (90.408084ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-597952 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (337.78s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-597952 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.26.1
E0224 01:35:03.154811   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/enable-default-cni-537815/client.crt: no such file or directory
E0224 01:35:03.160102   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/enable-default-cni-537815/client.crt: no such file or directory
E0224 01:35:03.170418   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/enable-default-cni-537815/client.crt: no such file or directory
E0224 01:35:03.190716   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/enable-default-cni-537815/client.crt: no such file or directory
E0224 01:35:03.231053   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/enable-default-cni-537815/client.crt: no such file or directory
E0224 01:35:03.311473   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/enable-default-cni-537815/client.crt: no such file or directory
E0224 01:35:03.471877   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/enable-default-cni-537815/client.crt: no such file or directory
E0224 01:35:03.792766   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/enable-default-cni-537815/client.crt: no such file or directory
E0224 01:35:04.433541   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/enable-default-cni-537815/client.crt: no such file or directory
E0224 01:35:05.714286   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/enable-default-cni-537815/client.crt: no such file or directory
E0224 01:35:08.274891   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/enable-default-cni-537815/client.crt: no such file or directory
E0224 01:35:13.396017   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/enable-default-cni-537815/client.crt: no such file or directory
E0224 01:35:17.006589   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/false-537815/client.crt: no such file or directory
E0224 01:35:21.692456   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/calico-537815/client.crt: no such file or directory
E0224 01:35:23.636414   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/enable-default-cni-537815/client.crt: no such file or directory
E0224 01:35:29.477855   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kindnet-537815/client.crt: no such file or directory
E0224 01:35:41.075446   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/custom-flannel-537815/client.crt: no such file or directory
E0224 01:35:44.117265   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/enable-default-cni-537815/client.crt: no such file or directory
E0224 01:35:57.967698   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/false-537815/client.crt: no such file or directory
E0224 01:36:16.213519   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/flannel-537815/client.crt: no such file or directory
E0224 01:36:16.218849   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/flannel-537815/client.crt: no such file or directory
E0224 01:36:16.229186   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/flannel-537815/client.crt: no such file or directory
E0224 01:36:16.249495   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/flannel-537815/client.crt: no such file or directory
E0224 01:36:16.289778   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/flannel-537815/client.crt: no such file or directory
E0224 01:36:16.370398   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/flannel-537815/client.crt: no such file or directory
E0224 01:36:16.530844   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/flannel-537815/client.crt: no such file or directory
E0224 01:36:16.851173   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/flannel-537815/client.crt: no such file or directory
E0224 01:36:17.491494   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/flannel-537815/client.crt: no such file or directory
E0224 01:36:18.772574   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/flannel-537815/client.crt: no such file or directory
E0224 01:36:21.332728   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/flannel-537815/client.crt: no such file or directory
E0224 01:36:25.078043   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/enable-default-cni-537815/client.crt: no such file or directory
E0224 01:36:26.453736   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/flannel-537815/client.crt: no such file or directory
E0224 01:36:32.388374   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/bridge-537815/client.crt: no such file or directory
E0224 01:36:32.393629   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/bridge-537815/client.crt: no such file or directory
E0224 01:36:32.403881   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/bridge-537815/client.crt: no such file or directory
E0224 01:36:32.424279   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/bridge-537815/client.crt: no such file or directory
E0224 01:36:32.464576   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/bridge-537815/client.crt: no such file or directory
E0224 01:36:32.544858   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/bridge-537815/client.crt: no such file or directory
E0224 01:36:32.705444   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/bridge-537815/client.crt: no such file or directory
E0224 01:36:33.026469   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/bridge-537815/client.crt: no such file or directory
E0224 01:36:33.050713   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/auto-537815/client.crt: no such file or directory
E0224 01:36:33.667597   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/bridge-537815/client.crt: no such file or directory
E0224 01:36:34.947997   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/bridge-537815/client.crt: no such file or directory
E0224 01:36:36.694937   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/flannel-537815/client.crt: no such file or directory
E0224 01:36:37.508250   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/bridge-537815/client.crt: no such file or directory
E0224 01:36:42.629325   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/bridge-537815/client.crt: no such file or directory
E0224 01:36:43.612877   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/calico-537815/client.crt: no such file or directory
E0224 01:36:52.870170   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/bridge-537815/client.crt: no such file or directory
E0224 01:36:57.176023   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/flannel-537815/client.crt: no such file or directory
E0224 01:37:00.427363   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/gvisor-694714/client.crt: no such file or directory
E0224 01:37:00.734825   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/auto-537815/client.crt: no such file or directory
E0224 01:37:02.996521   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/custom-flannel-537815/client.crt: no such file or directory
E0224 01:37:05.187991   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kubenet-537815/client.crt: no such file or directory
E0224 01:37:05.193261   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kubenet-537815/client.crt: no such file or directory
E0224 01:37:05.203502   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kubenet-537815/client.crt: no such file or directory
E0224 01:37:05.223810   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kubenet-537815/client.crt: no such file or directory
E0224 01:37:05.264173   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kubenet-537815/client.crt: no such file or directory
E0224 01:37:05.345183   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kubenet-537815/client.crt: no such file or directory
E0224 01:37:05.505666   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kubenet-537815/client.crt: no such file or directory
E0224 01:37:05.826244   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kubenet-537815/client.crt: no such file or directory
E0224 01:37:06.467215   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kubenet-537815/client.crt: no such file or directory
E0224 01:37:07.748045   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kubenet-537815/client.crt: no such file or directory
E0224 01:37:10.308574   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kubenet-537815/client.crt: no such file or directory
E0224 01:37:13.350393   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/bridge-537815/client.crt: no such file or directory
E0224 01:37:15.429401   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kubenet-537815/client.crt: no such file or directory
E0224 01:37:19.887934   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/false-537815/client.crt: no such file or directory
E0224 01:37:25.670162   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kubenet-537815/client.crt: no such file or directory
E0224 01:37:32.420470   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/skaffold-406990/client.crt: no such file or directory
E0224 01:37:38.137079   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/flannel-537815/client.crt: no such file or directory
E0224 01:37:45.634732   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kindnet-537815/client.crt: no such file or directory
E0224 01:37:46.150684   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kubenet-537815/client.crt: no such file or directory
E0224 01:37:46.998522   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/enable-default-cni-537815/client.crt: no such file or directory
E0224 01:37:54.310797   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/bridge-537815/client.crt: no such file or directory
E0224 01:38:13.318685   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kindnet-537815/client.crt: no such file or directory
E0224 01:38:23.938236   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
E0224 01:38:27.111411   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/kubenet-537815/client.crt: no such file or directory
E0224 01:38:40.891928   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/addons-031105/client.crt: no such file or directory
E0224 01:38:53.749142   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/ingress-addon-legacy-443815/client.crt: no such file or directory
E0224 01:38:55.463444   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/skaffold-406990/client.crt: no such file or directory
E0224 01:38:59.766926   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/calico-537815/client.crt: no such file or directory
E0224 01:39:00.057310   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/flannel-537815/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-597952 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.26.1: (5m37.472228606s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-597952 -n default-k8s-diff-port-597952
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (337.78s)
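
default-k8s-diff-port differs from the stock profile only in serving the API on 8444 instead of the default 8443 (--apiserver-port=8444). A hypothetical spot check once the second start completes, assuming the kubectl context is in place:

	kubectl --context default-k8s-diff-port-597952 cluster-info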

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-fp7rf" [56e4b9c6-1f39-4f99-9183-c88a090cbf98] Running
E0224 01:39:16.231648   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/bridge-537815/client.crt: no such file or directory
E0224 01:39:19.154066   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/custom-flannel-537815/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.016940126s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-fp7rf" [56e4b9c6-1f39-4f99-9183-c88a090cbf98] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008602377s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-884053 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-884053 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
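
VerifyKubernetesImages lists every image known to the VM's container runtime over ssh; the two gcr.io/k8s-minikube images above are left over from earlier steps and are reported informationally, not failed. The probe:

	out/minikube-linux-amd64 ssh -p no-preload-884053 "sudo crictl images -o json"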

TestStartStop/group/no-preload/serial/Pause (2.58s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-884053 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-884053 -n no-preload-884053
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-884053 -n no-preload-884053: exit status 2 (261.537672ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-884053 -n no-preload-884053
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-884053 -n no-preload-884053: exit status 2 (249.477204ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-884053 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-884053 -n no-preload-884053
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-884053 -n no-preload-884053
E0224 01:39:27.453642   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/calico-537815/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.58s)
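
For reference, the pause/unpause round-trip this test performs, as a sketch: while paused, {{.APIServer}} reports "Paused" and {{.Kubelet}} "Stopped", and "minikube status" deliberately exits 2, which the test tolerates ("may be ok").

out/minikube-linux-amd64 pause -p no-preload-884053
out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-884053   # Paused (exit 2)
out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-884053     # Stopped (exit 2)
out/minikube-linux-amd64 unpause -p no-preload-884053
out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-884053   # back to Running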

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-r4scr" [3820664e-3014-4058-a16c-836c68e1ae75] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.017590221s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/newest-cni/serial/FirstStart (77.22s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-784709 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.26.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-784709 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.26.1: (1m17.222828106s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (77.22s)
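
The FirstStart flags above are what make newest-cni a bare-CNI profile. The same invocation reformatted for readability (values copied from the run): --network-plugin=cni without installing any CNI is why later subtests warn that pods cannot schedule, and --extra-config forwards the pod network CIDR to kubeadm.

out/minikube-linux-amd64 start -p newest-cni-784709 \
  --memory=2200 --alsologtostderr \
  --wait=apiserver,system_pods,default_sa \
  --feature-gates ServerSideApply=true \
  --network-plugin=cni \
  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
  --driver=kvm2 --kubernetes-version=v1.26.1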

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-r4scr" [3820664e-3014-4058-a16c-836c68e1ae75] Running
E0224 01:39:36.044735   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/false-537815/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007517486s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-070599 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-070599 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/embed-certs/serial/Pause (2.9s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-070599 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-070599 -n embed-certs-070599
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-070599 -n embed-certs-070599: exit status 2 (254.721695ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-070599 -n embed-certs-070599
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-070599 -n embed-certs-070599: exit status 2 (266.110468ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-070599 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-070599 -n embed-certs-070599
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-070599 -n embed-certs-070599
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.90s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-tl8mr" [1b6b6caa-e1c3-431a-ac7c-a87538a48110] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.018690451s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-tl8mr" [1b6b6caa-e1c3-431a-ac7c-a87538a48110] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00861555s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-597952 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-784709 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-784709 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.048059573s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)
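
One way to confirm the --images/--registries overrides above took effect (a sketch, not part of the test; deployment name and namespace assumed from the stock metrics-server addon):

kubectl --context newest-cni-784709 -n kube-system get deploy metrics-server \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
# expected to point at fake.domain/... rather than the default registry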

TestStartStop/group/newest-cni/serial/Stop (13.12s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-784709 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-784709 --alsologtostderr -v=3: (13.119845428s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.12s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-597952 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.55s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-597952 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-597952 -n default-k8s-diff-port-597952
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-597952 -n default-k8s-diff-port-597952: exit status 2 (247.995607ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-597952 -n default-k8s-diff-port-597952
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-597952 -n default-k8s-diff-port-597952: exit status 2 (242.406119ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-597952 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-597952 -n default-k8s-diff-port-597952
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-597952 -n default-k8s-diff-port-597952
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.55s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-784709 -n newest-cni-784709
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-784709 -n newest-cni-784709: exit status 7 (77.188378ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-784709 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
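
Here exit status 7 from "minikube status" indicates a stopped host, which is exactly what the test expects after Stop; the addon is then enabled offline so it activates on the next start. A sketch of the same sequence:

out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-784709; echo "exit=$?"   # Stopped, exit=7
out/minikube-linux-amd64 addons enable dashboard -p newest-cni-784709 \
  --images=MetricsScraper=k8s.gcr.io/echoserver:1.4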

TestStartStop/group/newest-cni/serial/SecondStart (46.56s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-784709 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.26.1
E0224 01:41:16.213203   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/flannel-537815/client.crt: no such file or directory
E0224 01:41:32.388448   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/bridge-537815/client.crt: no such file or directory
E0224 01:41:33.050530   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/auto-537815/client.crt: no such file or directory
E0224 01:41:43.897798   11131 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-4074/.minikube/profiles/flannel-537815/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-784709 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.26.1: (46.258392278s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-784709 -n newest-cni-784709
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (46.56s)
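
SecondStart repeats the full flag set, but minikube also persists these settings in the profile config, so a bare restart would reuse them. A sketch (assumes the default MINIKUBE_HOME layout):

cat ~/.minikube/profiles/newest-cni-784709/config.json   # saved profile settings
out/minikube-linux-amd64 start -p newest-cni-784709      # restart reusing stored config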

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-784709 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/Pause (2.32s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-784709 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-784709 -n newest-cni-784709
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-784709 -n newest-cni-784709: exit status 2 (242.022301ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-784709 -n newest-cni-784709
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-784709 -n newest-cni-784709: exit status 2 (240.604127ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-784709 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-784709 -n newest-cni-784709
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-784709 -n newest-cni-784709
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.32s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-p6rdh" [46a2942b-fe2c-4717-99f1-cae3a81ea79a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013858655s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-p6rdh" [46a2942b-fe2c-4717-99f1-cae3a81ea79a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006879712s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-505768 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-505768 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (2.38s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-505768 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-505768 -n old-k8s-version-505768
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-505768 -n old-k8s-version-505768: exit status 2 (241.524218ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-505768 -n old-k8s-version-505768
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-505768 -n old-k8s-version-505768: exit status 2 (238.341245ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-505768 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-505768 -n old-k8s-version-505768
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-505768 -n old-k8s-version-505768
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.38s)

Test skip (29/300)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.26.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.26.1/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.1/cached-images (0.00s)

TestDownloadOnly/v1.26.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.26.1/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.1/binaries (0.00s)

TestDownloadOnly/v1.26.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.26.1/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.26.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:214: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:463: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:544: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:109: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:292: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (4.21s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-537815 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-537815
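
Every probe below fails the same way because the cilium-537815 profile had already been deleted when debugLogs ran, so no kubeconfig context or minikube profile exists. A quick sketch to confirm what is actually present on the host:

kubectl config get-contexts             # kubeconfig contexts
out/minikube-linux-amd64 profile list   # minikube profiles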

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-537815

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-537815

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-537815

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-537815

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-537815

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-537815

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-537815

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-537815

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-537815

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-537815

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-537815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-537815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-537815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-537815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-537815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-537815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-537815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-537815" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-537815

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-537815

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-537815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-537815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-537815

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-537815

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-537815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-537815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-537815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-537815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-537815" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-537815

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

>>> host: cri-dockerd version:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

>>> host: containerd daemon status:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

>>> host: containerd daemon config:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

>>> host: containerd config dump:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

>>> host: crio daemon status:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

>>> host: crio daemon config:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

>>> host: /etc/crio:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

>>> host: crio config:
* Profile "cilium-537815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-537815"

----------------------- debugLogs end: cilium-537815 [took: 3.765976319s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-537815" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-537815
--- SKIP: TestNetworkPlugins/group/cilium (4.21s)

x
+
TestStartStop/group/disable-driver-mounts (0.52s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-337203" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-337203
--- SKIP: TestStartStop/group/disable-driver-mounts (0.52s)

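Note: both SKIP results above are expected on this KVM job rather than failures; disable-driver-mounts is explicitly gated to the virtualbox driver. To exercise it on a VirtualBox host, one could invoke the integration test directly — a sketch assuming the minikube repo layout used by this CI job; the extra flags and timeout the suite needs may require adjusting:

	go test ./test/integration -run 'TestStartStop/group/disable-driver-mounts' -v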